Sensitivity Analysis and Neural Networks

Ray Tsaih*, Hsiou-Wei Lin**, Yi-ng Lin*

* Dept. of Management Information Systems, National Chengchi University
** Dept. of International Business, National Taiwan University, Taiwan

Abstract

This study presents the methodology of sensitivity analysis and explores whether it can serve as an alternative evaluation criterion as well as a tool to read the knowledge embedded in artificial neural networks. The simulation of the Black-Scholes formula is employed for this purpose. Since, in the Black-Scholes formula, the mapping relationship between the call price and five relevant variables is a mathematically closed form, it is feasible to verify the validity of the methodology of sensitivity analysis. The experiment results are promising; they show that the values of the sensitivity analysis and the partial derivatives of the Black-Scholes formula are consistent. Furthermore, the sensitivity analysis can be an alternative criterion for comparing the effectiveness of ANNs.

Keywords: Sensitivity analysis, Neural Networks, Options

1. Introduction and Literature Review

This study adopts artificial neural networks (ANN) to simulate the Black-Scholes formula for pricing call options. Similar research has been done, but it focused mainly on the performance comparison with some statistical models, and offered no further analysis of the ANNs. Here we present the
methodology of sensitivity analysis, which can explore the knowledge embedded in ANNs, to see whether the ANNs are actually well trained and valid.

The object of this study is to investigate whether the sensitivity analysis can serve as an alternative evaluation criterion as well as a tool to read the ANN's knowledge. This is feasible since the ANNs are trained to simulate the Black-Scholes formula, in which the mapping between the call price and five relevant variables is a mathematically closed form. There are two ANNs: the multi-layered feed-forward (MLP) network with the Back Propagation learning algorithm (BP) (Rumelhart et al., 1986) and the one with the RNBP learning algorithm (RNBP) (Tsaih, Chen, & Lin, 1998). The performances of BP and RNBP are measured and compared based on two criteria: the learning efficiency and the forecasting error.

When the ANN is used as a modeling tool, it is interesting to check whether the ANN is well trained and whether it can display some useful information about the task. For the first question, we might simply test the ANN with a huge amount of data. If the performance of the ANN is acceptable within a predetermined tolerance, it is comparatively reliable to claim that the ANN has been well trained. As for the second question, it is necessary to have a deeper analysis of the network structure, an analysis that in fact is rather complicated mathematically. For example, it is desirable to be capable of specifying the relationships between input and output variables. The sensitivity analysis, which is similar to the factor analysis in statistics, is proposed to examine the impact of each input variable. When the model has been completely understood, the sensitivity analysis can be utilized to examine whether the characteristic of each (input) variable in the network system has been well trained. On the other hand, when we are not sure how the variables interact within a system, the sensitivity analysis is capable of exploring the complexity of the sensitivity curve, which corresponds to the sensitivity of the output value to the variation of each (input) variable.

The experiment results are promising. The values of the sensitivity analysis and the partial derivatives are consistent.
Furthermore, in both the sensitivity analysis of the ANN and the partial derivatives of the formula, the stock price and the strike price are the most determinant factors of the call price, compared with the other variables: the risk-free interest rate, the time to expiration, and the volatility. Also, in both the sensitivity analysis and the partial derivatives, the stock price positively affects the call price and the strike price negatively affects the call price.

In the following, we first review the
relevant work. In Section 2, we describe our experiment design. The performances and analyses of the experiments are presented in Section 3. In Section 4, we summarize the lessons learned from this study, as well as the future work.

1.1 The Pricing of Options

Options on stocks were first traded on an organized exchange in 1973. Since then there has been a dramatic growth in options markets. Options are now traded on many exchanges throughout the world. Huge volumes of options are also traded over the counter by banks and other institutions. The underlying assets include stocks, stock indices, foreign currencies, debt instruments, commodities, and futures contracts. There are two basic types of options: a call option gives the holder the right to buy the underlying asset by a certain date for a certain price, whereas a put option gives the holder the right to sell the underlying asset by a certain date for a certain price. The price in the contract is known as the exercise price or strike price; the date is known as the expiration date, exercise date, or maturity. American options can be exercised at any time up to the expiration date. European options can be exercised only on the expiration date.

According to (Black & Scholes, 1973), derived based on the no-arbitrage condition and other assumptions, the pricing model for the European call option can be expressed as follows:

    C = S N(d1) − K e^(−RT) N(d2)    (1)

with d1 = [ln(S/K) + (R + 0.5 σ^2) T] / (σ √T) and d2 = d1 − σ √T; where C is the price of a call option, S the stock price, K the strike price, R the interest rate, T the maturity, σ the volatility of the stock return, and N(x) the cumulative normal distribution function. This formula has provided a great contribution to the option market for a long time and many advanced analyses of this model have been introduced. The first-order partial derivatives of the Black-Scholes formula are well defined; readers who are interested in their detailed explanations may refer to (Hull, 1997). Those partial derivatives will be used as a benchmark for the validation of the ANN through sensitivity analysis.
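Equation (1) is easy to check numerically. The following sketch (standard library only; the function and variable names are ours, not the paper's) prices a European call with equation (1):

```python
import math

def norm_cdf(x: float) -> float:
    """Cumulative standard normal distribution N(x)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S: float, K: float, R: float, T: float, sigma: float) -> float:
    """Black-Scholes European call price, equation (1)."""
    d1 = (math.log(S / K) + (R + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-R * T) * norm_cdf(d2)

print(round(bs_call(100.0, 100.0, 0.05, 1.0, 0.2), 4))
```

For S = K = 100, R = 0.05, T = 1 and σ = 0.2 this gives about 10.45, matching standard tables.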
1.2 Applications of artificial neural networks in the pricing of options

Applications of ANNs to pricing options have been explored a lot recently; see, for example, (Hutchinson et al., 1994; Lajbcygier et al., 1996; Hanke, 1997). All of them applied BP to simulate the Black-Scholes formula.
Hutchinson and his colleagues adopted Monte Carlo simulations to produce a two-year sample of daily stock prices and create a cross-section of options each date according to the rules of the Chicago Board Options Exchange, with prices given by the Black-Scholes formula. For comparison, they estimated models using four popular methods: the least squares method, the radial basis function neural networks, BP, and the projection pursuit. Their results showed the performance of BP was not significantly better than the statistical models. Note that, in their study, the values of the strike price (K) and maturity (T) needed to be fixed in each Monte Carlo simulation.

Lajbcygier and his colleagues set up a BP with 4 inputs (the ratio of the stock price over the strike price, maturity, interest rate, and volatility) and one output (the ratio of the call price over the strike price). The ranges of the input variables were as follows: the values of S/K were in the range [0.9, 1.1], the values of T were in the range [0.0, 0.2], R was the risk-free interest rate, and σ was the volatility of the underlying future. They claimed approximately 54% of the real data falls within those ranges. They compared BP's results to statistical linear regressions, and argued that, in such ranges of the input variables, BP's performance was significantly better than those of the statistical methods.

Hanke incorporated the GARCH(1,1) model and stochastic volatility into BP networks, which had 7 input nodes, 50 hidden nodes and 1 output node. Hanke adopted the GARCH(1,1) for additional information regarding the current volatility. He merely presented the deviations from the target values.

1.3 BP and RNBP

(Rumelhart et al., 1986) presented BP; since then, BP has been widely used in many fields. There are, however, notorious predicaments when using BP; for example, the unknown proper number of hidden nodes, the merely relatively optimal learning result, and the sluggish learning process (Tsaih, 1993).
Many modifications of the original BP have been presented (Sarkar, 1995); for example, the momentum strategy, the adaptive learning rate (Takechi et al., 1995), the self-adaptive back propagation (Jacobs, 1988), the controlling of the oscillation of weights, the rescaling of weights, the expected source values, the adaptive learning rates, the conjugate gradient method (Battiti, 1992), and the different error functions. But none of them provides a generalized solution for the undesirable predicaments of BP. (Wang, 1995) argued that the unpredictability was the biggest problem of BP, and that more information or prior knowledge of the case can provide more meaningful classification boundaries for the network structures. To address these notorious predicaments
of BP, Tsaih has developed the Reasoning Neural Networks, which is an MLP network with the reasoning learning algorithm (RN) (Tsaih, 1993; Tsaih, 1997; Tsaih, 1998). In summary, RN guarantees an optimal solution for 2-classes categorization learning problems. At this point, however, RN is designed to deal only with binary output patterns. When working with non-binary outputs, real numbers can first be converted into binary digits. However, this increases the number of output nodes and the learning complexity, requiring a longer learning time. Thus, Tsaih has further developed RNBP (Tsaih, Chen & Lin, 1998), which can deal with non-binary output patterns. As stated in (Tsaih, Chen & Lin, 1998), RNBP significantly outperforms BP in the effectiveness regarding the testing data set, while both of them perform similarly in the effectiveness regarding the training data set. Therefore, we also adopt RNBP in our research. A brief summary of RNBP is presented in the following two paragraphs.¹

The main idea of RNBP is to utilize the following credits of the learning algorithm of RN: the ability of autonomously recruiting as well as pruning hidden nodes during the learning process, and the guarantee of obtaining the desired solution for the 2-classes categorization learning problem. With those credits, as well as the fact that both RN and BP can be applied to the MLP network, RN may acquire some useful information for BP, for example, a proper amount of used hidden nodes and well-assigned (initial) weights. With respect to application problems in which the output values are real values, we first classify the training data into two categories via a rule of thumb. For example,

    d̂_c = 1 if d_c ≥ d̄;  d̂_c = −1 if d_c < d̄    (2)

where d_c is the cth (real) desired output value, d̄ is the mean value (or the median) of the d_c's, and d̂_c is the corresponding desired output value for RN's learning algorithm. In other words, each output value will be replaced by the associated binary digit. Such data are used as the training patterns for RN's learning algorithm.

¹ For the details of RNBP, the readers are referred to (Tsaih, Chen & Lin, 1998).
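The rule of thumb in equation (2) amounts to thresholding the real targets at their mean (or median). A minimal sketch, with helper names of our own choosing:

```python
def binarize_targets(targets):
    """Equation (2): map each real desired output to +1 / -1 by comparing to the mean."""
    mean = sum(targets) / len(targets)
    return [1 if d >= mean else -1 for d in targets]

print(binarize_targets([1.0, 2.0, 3.0, 4.0]))  # mean is 2.5 -> [-1, -1, 1, 1]
```

The resulting ±1 patterns are what RN's learning algorithm consumes in the first stage of RNBP.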
Then we adopt the network obtained from RN's learning, and use BP to learn the original training data.

1.4 The sensitivity analysis

(Yoon et al., 1994) argued that, after building the ANN, reading or understanding the knowledge in the ANN was difficult because the knowledge was distributed over the entire network. However, the sensitivity analysis of the ANN is necessary not only for a better
understanding of the mapping between input and output variables in the applied domains, but also for further research on the ANN itself.

Table 1  List of papers relevant to the sensitivity analysis

Reference                                      Formula
Yoon, Guimaraes, & Swales (1994)               RS_lj = (Σ_{i=1}^{p} w_ji r_li) / (Σ_{j=1}^{m} ABS(Σ_{i=1}^{p} w_ji r_li))
Naimimohasses, Barnett, Green, & Smith (1995)  S_jl(B_c) = Σ_{i=1}^{p} r_li w_ij (1 − h(B_c, X_i)^2); S = (1/n) Σ_{c=1}^{n} ABS(S_jl(B_c))
Steiger & Sharda (1996)                        RS_lj = ABS(ḡ_lj) / Σ_{j=1}^{m} ABS(ḡ_lj)
Chou, Liu, & Tsaih (1996)                      S_lj = Σ_{c=1}^{k} Σ_{i=1}^{p} r_li (1 − h(B_c, X_i)^2) w_ij

From the literature review, there were some related studies. Table 1 shows some relevant literature; for the explanations of the symbols, please refer to Table 2 and the following paragraphs.

Without presenting the derivation of the methodology, (Yoon et al., 1994) proposed a way of profiling the impact of each input variable. For a network with one hidden layer, it involved computing a test statistic of the form:

    RS_lj = (Σ_{i=1}^{p} w_ji r_li) / (Σ_{j=1}^{m} ABS(Σ_{i=1}^{p} w_ji r_li))    (3)

RS_lj was the relative strength between the jth input and the lth output variable, and ABS(·) means the absolute value. This statistic measured the strength of the relationship of the jth input and the lth output variable relative to the total strength over all of the input and output variables. It was similar to the multivariate analysis.

(Naimimohasses et al., 1995) defined a sensitivity matrix for the input and output vector arrays over the training patterns:

    S_jl(B_c) = Σ_{i=1}^{p} r_li w_ij (1 − h(B_c, X_i)^2)    (4)

    S = (1/n) Σ_{c=1}^{n} ABS(S_jl(B_c))    (5)

That is, the total sensitivity S was derived by calculating the statistical significance of the
contribution due to each individual input, S_jl(B_c), over all training patterns. Naimimohasses et al. were interested in plotting the sensitivity as a function of training epochs, which could give trends of relative input sensitivity.

Table 2  List of Symbols

m, p, and q : the amounts of input, hidden, and output nodes, respectively.
B_c : the cth given stimulus input pattern, c = 1, 2, …, k.
b_cj : the input value received by the jth input node when B_c is presented to the network.
w_ij : the weight of the connection between the jth input node and the ith hidden node.
w_i : the vector of weights of the connections between all input nodes and the ith hidden node; w_i ≡ (w_i1, w_i2, …, w_im).
θ_i : the negative of the threshold value of the ith hidden node.
X_i^t ≡ (θ_i, w_i^t) and X^t ≡ (X_1^t, X_2^t, …, X_p^t).
h(B_c, X_i) : the activation value of the ith hidden node given the stimulus B_c, and h(B_c, X_i) ≡ tanh(θ_i + Σ_{j=1}^{m} w_ij b_cj).
h(B_c, X) : the activation value vector of all hidden nodes given the stimulus B_c, and h(B_c, X) ≡ (h(B_c, X_1), h(B_c, X_2), …, h(B_c, X_p))^t.
r_li : the weight of the connection between the ith hidden node and the lth output node.
r_l : the vector of weights of the connections between all hidden nodes and the lth output node; r_l ≡ (r_l1, r_l2, …, r_lp).
s_l : the negative of the threshold value of the lth output node.
Y_l^t ≡ (s_l, r_l^t), Y^t ≡ (Y_1^t, Y_2^t, …, Y_q^t), and Z^t ≡ (Y^t, X^t).
net(B_c, Y_l, X) : the net input value of the lth output node given the stimulus B_c, and net(B_c, Y_l, X) ≡ s_l + Σ_{i=1}^{p} r_li h(B_c, X_i).
O(B_c, Y_l, X) : the activation value of the lth output node given the stimulus B_c, and O(B_c, Y_l, X) ≡ tanh(net(B_c, Y_l, X)).
d_cl : the desired output value of the lth output node when B_c is presented to the network.

(Jarvis & Stuart, 1996) adopted the sensitivity analysis to explore the effects of altering network parameters on the training times and the classification accuracy. (Steiger & Sharda, 1996) calculated the relative sensitivity of the inputs to the network by wiggling each input value. The effect of each wiggle on every training pattern was determined, and the overall average absolute difference between the modified outputs and the original outputs was calculated.
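This wiggle-style measurement can be sketched for any black-box model: perturb one input at a time, average the absolute output change over the patterns, then normalize over the inputs. The step size and helper names below are our assumptions, not Steiger and Sharda's:

```python
def wiggle_sensitivity(model, patterns, delta=0.01):
    """Average absolute output change per wiggled input, normalized over all inputs."""
    m = len(patterns[0])
    g = []
    for j in range(m):
        diffs = []
        for b in patterns:
            wiggled = list(b)
            wiggled[j] += delta          # wiggle only the jth input
            diffs.append(abs(model(wiggled) - model(b)))
        g.append(sum(diffs) / len(diffs))
    total = sum(g)
    return [gj / total for gj in g]

# Toy model: a fixed linear map, so sensitivities should follow the coefficient sizes.
model = lambda b: 1.0 * b[0] + 2.0 * b[1] + 4.0 * b[2] + 10.0 * b[3]
print(wiggle_sensitivity(model, [[0.1, -0.2, 0.3, 0.0], [0.0, 0.4, -0.1, 0.2]]))
```

For a linear model the normalized sensitivities reduce to the coefficients' absolute values divided by their sum, regardless of the patterns used.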
The input sensitivity was the relative ratio of the overall average absolute difference over all input variables. It seems their mathematical model
was as follows:

    RS_lj = ABS(ḡ_lj) / Σ_{j=1}^{m} ABS(ḡ_lj)    (6)

where RS_lj is the relative sensitivity between the jth input and the lth output variable, and ḡ_lj is the average value of the effect of wiggling the value of each input.

In (Chou et al., 1996), the sensitivity factor of each input variable was defined as the sum of the partial derivatives over all training patterns:

    S_lj = Σ_{c=1}^{k} Σ_{i=1}^{p} r_li (1 − h(B_c, X_i)^2) w_ij    (7)

S_lj was the sensitivity factor between the jth input and the lth output variable.

Some sensitivity analyses described in Table 1 provided a gross indicator of key factors via measuring the effect of altering an input variable on the output value by integrating over all input patterns. However, they ignored the potential interaction between two or more input patterns. To summate them probably neutralizes their interactions. Here we modify the sensitivity factor derived in (Chou et al., 1996). The modified sensitivity factor does not summate over all training patterns, and is defined as follows:

    S_lj(B_c) ≡ ∂O(B_c, Y_l, X) / ∂b_cj    (8)

    S_lj(B_c) = (∂O/∂net_l) Σ_{i=1}^{p} (∂net_l/∂h_i)(dh_i/dnet_i)(∂net_i/∂b_cj)
              = (1 − O(B_c, Y_l, X)^2) Σ_{i=1}^{p} r_li (1 − h(B_c, X_i)^2) w_ij    (9)

where net_l corresponds to the net input variable of the lth output node, h_i corresponds to the activation variable of the ith hidden node, and net_i corresponds to the net input variable of the ith hidden node. The relative sensitivity factor is defined as follows:

    R_lj(B_c) ≡ S_lj(B_c) / Σ_{k=1}^{m} ABS(S_lk(B_c))    (10)

It is feasible to plot together the R_lj(B_c) values and the partial derivatives of a closed-form equation over all training patterns, and make the comparison. For example, suppose an equation is defined as:

    f(w, x, y, z) = w + 2x + 4y + 10z    (11)

where w, x, y and z are independent variables. We use the RNBP network with four input nodes and one output node, and there are totally 400 training patterns, which are generated randomly with the value of each variable ranging from -0.5 to 0.5.
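Equations (8) and (9) are simply the chain rule applied to a one-hidden-layer tanh network, so they can be checked against a finite difference. A self-contained sketch with arbitrary made-up weights (all names are ours):

```python
import math

def forward(b, W, theta, r, s):
    """One-hidden-layer tanh network: h_i = tanh(theta_i + sum_j W[i][j] b_j),
    O = tanh(s + sum_i r[i] h_i)."""
    h = [math.tanh(theta_i + sum(wij * bj for wij, bj in zip(Wi, b)))
         for Wi, theta_i in zip(W, theta)]
    O = math.tanh(s + sum(ri * hi for ri, hi in zip(r, h)))
    return O, h

def sensitivity(b, W, theta, r, s):
    """Equation (9) and its normalization, equation (10)."""
    O, h = forward(b, W, theta, r, s)
    S = [(1 - O ** 2) * sum(r[i] * (1 - h[i] ** 2) * W[i][j] for i in range(len(h)))
         for j in range(len(b))]
    total = sum(abs(Sj) for Sj in S)
    R = [Sj / total for Sj in S]
    return S, R

# Made-up 3-input, 2-hidden-node network.
W = [[0.3, -0.8, 0.5], [1.1, 0.2, -0.4]]
theta = [0.1, -0.2]
r = [0.7, -0.9]
s = 0.05
S, R = sensitivity([0.2, -0.1, 0.3], W, theta, r, s)
print(S, R)
```

Each S_j here agrees with a numerical derivative of the network output, and the R_j sum to one in absolute value by construction.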
The mean values of w, x, y and z over those 400 training patterns are 0.034, …, …, and …, respectively. After finishing the learning, the R_lj(B_c) values of all training patterns can be calculated from the methodology described above, and then compared to the partial derivative values of f(w, x, y, z). Let us take y as the illustration. Figure 1 shows the values of y, the corresponding values of ∂f/∂y, and the corresponding R_lj(B_c) values derived from RNBP. It seems that RNBP's generalizing ability is not good when the value of y is less than -0.4 or greater than 0.4.

Figure 1  The curves of the partial derivative and the sensitivity of equation (11) concerning y.

Let S̄_lj ≡ Σ_{i=1}^{p} r_li w_ij. Since the factor (1 − O(B_c, Y_l, X)^2)(1 − h(B_c, X_i)^2) in equation (9) does not depend on the index j of the wiggled input, the general relative sensitivity factor can be defined as:

    R̄_lj ≡ S̄_lj / Σ_{k=1}^{m} ABS(S̄_lk) = (Σ_{i=1}^{p} r_li w_ij) / (Σ_{k=1}^{m} ABS(Σ_{i=1}^{p} r_li w_ik))    (12)

Take the above case of f(w, x, y, z) as the illustration: the R̄_lj of w, x, y, z are …, …, …, and …, respectively; and the mean values of the R_lj(B_c) are …, …, …, and …, respectively. Compared with the mean values of the partial derivatives of f concerning w, x, y, and z, which are …,
…, …, and …, the relative magnitudes of R̄_lj or R_lj(B_c) are similar to the mean values of the partial derivatives of f, though the numbers are not the same.

A larger R_lj(B_c) indicates that, given a particular input pattern B_c, the lth output is more sensitive to the deviation of the jth input, while a larger R̄_lj indicates that, in general, the deviation of the jth input has a larger impact on the lth output. Thus, R_lj(B_c) and R̄_lj are the relative impacts of the jth input on the lth output concerning some input pattern B_c and a general input pattern, respectively. With the definition of the relative impact, R_lj(B_c) and R̄_lj, it is possible to calculate the relative average sensitivity of the output to each input despite the (input) variables' interdependency. If the simulation model is unknown, the relative impact can be used to explore the characteristic of each input. In addition, the relative impact can be a tool for factor filtering. If the input variables of an ANN are imperfectly selected or have incomplete information, the relative impact is helpful for finding a less relevant factor whose relative impact is zero or tiny.

2. Experiment Designs and Methodology

The input data (S, K, R, T, σ) are generated randomly in the ranges defined in Table 3, and the desired output is the call price derived from the Black-Scholes formula. The patterns are separated into two categories: the in-the-money options and the out-of-the-money options. It is called an in-the-money option if the stock price is greater than its strike price, and an out-of-the-money option if the stock price is less than its strike price. In order to study whether the ANN behaves differently upon those two categories of options, there are two experiments: one for the in-the-money options and one for the out-of-the-money options.

Table 3  Ranges of input variables of training networks

Variable   In-the-money       Out-of-the-money    Distribution
S          …                  …                   Uniform
K          1.0 < S/K < …      … < S/K < 1.0       Uniform
R          …                  …                   Uniform
T          …                  …                   Uniform
σ          …                  …                   Uniform

Thus, there are five input nodes, each of which corresponds to one input variable, and one output node, which corresponds to the call price. There exist 400 training patterns and 1000 testing patterns for both BP and RNBP.
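The sampling scheme of Table 3 can be sketched as follows. Since the printed ranges are not all legible here, the numeric bounds below are illustrative assumptions of ours, not the paper's values; only the structure (uniform draws, K set through the S/K ratio, the two moneyness categories) follows the text:

```python
import random

random.seed(0)

def make_pattern(in_the_money: bool):
    """Draw one (S, K, R, T, sigma) tuple uniformly; bounds are illustrative assumptions."""
    S = random.uniform(80.0, 120.0)
    # Table 3 constrains K through the S/K ratio: > 1 in the money, < 1 out of the money.
    ratio = random.uniform(1.01, 1.2) if in_the_money else random.uniform(0.8, 0.99)
    K = S / ratio
    R = random.uniform(0.01, 0.1)
    T = random.uniform(0.1, 1.0)
    sigma = random.uniform(0.1, 0.4)
    return S, K, R, T, sigma

train = [make_pattern(True) for _ in range(400)]    # 400 training patterns
test = [make_pattern(True) for _ in range(1000)]    # 1000 testing patterns
```

The desired outputs would then be obtained by feeding each tuple through the Black-Scholes formula of equation (1).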
The networks are trained with a random sequence of patterns. Figures 2 and 3 display the sorted and unsorted desired
values of the call prices of the 400 training patterns concerning the in-the-money options and the out-of-the-money options, respectively. The average call prices of the in-the-money and the out-of-the-money options are 7.96 and 6.17, respectively.

Figure 2  The desired call prices concerning the in-the-money options

Figure 3  The desired call prices concerning the out-of-the-money options

RNBP is applied repeatedly to the same data set with different learning parameters and input sequences of the training patterns. As RNBP repeats, different amounts of recruited hidden nodes are obtained. Thus RNBP is applied several times till the variance of the amounts of recruited hidden nodes is acceptable. From the experimental results,
twenty repeated simulations of RNBP have a reasonable volatility of the amounts of recruited hidden nodes. Therefore, for both experiments upon the in-the-money options and the out-of-the-money options, 20 repeated simulations of RNBP and BP have been performed. There are two BPs, which have 4 and 12 hidden nodes, respectively, and are denoted BP(1) and BP(2), respectively. The initial weights and threshold values are given randomly from -1.0 to 1.0. The reason we run BP(2) is that the average amount of hidden nodes recruited over the 20 RNBP runs is near 12; it is desirable to make a fair comparison between BP and RNBP based on a similar network structure.

The evaluation criteria for the system performance include the efficiency and the effectiveness. The purpose of exploring the efficiency is to study which one has a faster learning process, and that of exploring the effectiveness is to study which one has a better generalization ability. For displaying the efficiency, the information of the average amount of learning iterations spent in each case is used. One of the stopping criteria of the learning is the tolerable error level, which is set as 0.01; however, the upper bound of learning iterations is 10000. That is, if the value of the total error cannot converge below the tolerable error level within 10000 iterations, the learning stops. As for measuring the effectiveness, the mean relative error is used. The mean relative error is defined as:

    (1/n) Σ_{c=1}^{n} ABS(d_cl − O(B_c, Y_l, X)) / a_cl    (13)

where a_cl is the actual option price, the subscript c denotes the cth testing datum, and n is the amount of testing data.

3. Performance and Analysis

3.1 Simulation performance of BP and RNBP

Table 4 displays the summary results of the simulations. The averages and standard deviations of N_RNBP concerning the in-the-money options are very similar to those concerning the out-of-the-money options. Table 4 also displays the following facts:

(1) In each experiment, the average and the standard deviation of T_BP(1) and A_BP(1) are quite similar to those of T_BP(2) and A_BP(2).
(2) From the experimental results of T_BP(1), T_BP(2) and T_RNBP, it seems that, in terms of the learning efficiency, it is more difficult for both ANNs to learn the out-of-the-money options than the in-the-money options. Furthermore, both BP and RNBP perform better in the in-the-money options than in the out-of-the-money options.
Table 4  Simulation results of BP and RNBP. T denotes the amount of the learning iterations taken in the BP part, A denotes the mean relative error, and N is the amount of recruited hidden nodes.

The in-the-money options
                            T_BP(1)  A_BP(1)  T_BP(2)  A_BP(2)  T_RNBP  A_RNBP  N_RNBP
Average                     6        8.4%     …        …%       …       …%      12.35
Standard deviation          …        …%       …        …%       …       …%      5.45
Average CPU time (seconds)  …

The out-of-the-money options
                            T_BP(1)  A_BP(1)  T_BP(2)  A_BP(2)  T_RNBP  A_RNBP  N_RNBP
Average                     …        …%       …        …%       …       …%      12.1
Standard deviation          …        …%       …        …%       …       …%      5.45
Average CPU time (seconds)  …

Correlation coefficients, the in-the-money options: (T_BP(1), A_BP(1)) …, (T_BP(2), A_BP(2)) …, (T_RNBP, A_RNBP) …, (T_RNBP, N_RNBP) …, (A_RNBP, N_RNBP) …
Correlation coefficients, the out-of-the-money options: (T_BP(1), A_BP(1)) …, (T_BP(2), A_BP(2)) …, (T_RNBP, A_RNBP) …, (T_RNBP, N_RNBP) …, (A_RNBP, N_RNBP) …

(3) Regarding the mean relative errors, RNBP outperforms both BP(1) and BP(2) (p-values are 1.1E-06 and 1.15E-06, respectively, by T-test) in the out-of-the-money options, although both BP(1) and BP(2) perform a little better than RNBP (p-values are … and 0.068, respectively, by T-test) in the in-the-money options.

(4) The standard deviation of A_RNBP is evidently greater than those of A_BP(1) and A_BP(2) (p-values are nearly 0.0, by F-test) in both the in-the-money and the out-of-the-money options.

(5) The mean of T_RNBP is significantly less, but the CPU time taken by RNBP is larger than the one taken by BP(1). This is because RNBP adopts many more hidden nodes (12.35 and 12.1 on average, respectively) than BP(1). More hidden nodes cause more time complexity.

(6) A_RNBP increases as T_RNBP increases. This is reasonable since some simulations do not reach the tolerable error level within 10000 iterations. If they do not learn successfully, their
forecasting errors should be larger.

(7) N_RNBP is less relevant to either A_RNBP or T_RNBP in both the in-the-money and the out-of-the-money experiments.

3.2 The sensitivity analysis

Table 5  The results of the sensitivity analysis: mean (standard deviation) of the partial derivative values (row B-S) and of the R_lj(B_c) values (rows BP(1), BP(2), RNBP) over all training patterns, together with R̄_lj.

In-the-money            S            K            R            T            σ
B-S                     … (0.0311)   … (0.0546)   … (0.0339)   … (0.078)    … (0.037)
BP(1)                   … (1.05E-05) … (1.56E-05) … (1.16E-05) … (1.30E-05) … (9.3E-06)
BP(2)                   … (1.81E-05) … (3.10E-05) … (1.6E-05)  0.09 (2.45E-05) … (1.60E-05)
RNBP                    … (0.1186)   … (0.131)    … (0.1075)   … (0.0564)   … (0.045)

Out-of-the-money        S            K            R            T            σ
B-S                     … (0.1368)   … (0.098)    … (0.0646)   … (0.0346)   … (0.0559)
BP(1)                   … (0.009)    … (0.0018)   … (0.0065)   … (0.0006)   … (0.001)
BP(2)                   … (3.08E-05) … (6.2E-05)  … (2.64E-05) … (3.75E-05) … (3.14E-05)
RNBP                    … (0.0546)   … (0.0466)   … (0.0603)   … (0.011)    … (0.0371)

Table 5 displays the results of the sensitivity analysis. The results in both the ANN and the partial derivative values are consistent. It shows that, in both the ANN and the partial derivative values, S and K are the most determinant factors of the call price, compared with the other variables, R, T and σ. Furthermore, in both the ANN and the partial derivative values, S positively affects the call price and K negatively affects the call price. Thus, the
results of the sensitivity analysis indicate that both ANNs have learned the characteristics of those five input variables. However, it seems that RNBP has learned the characteristics of those five input variables more successfully than BP.

Table 5 also displays that the standard deviations of R_lj(B_c) in BP are much smaller, compared with those in RNBP. Such a phenomenon can be demonstrated more clearly with Figure 4. It seems that, with BP, the sensitivity of every input pattern is almost the same. This may be due to the saturation phenomenon happening in BP. BP adopts the tanh function as its activation function. Thus, if most weights between the hidden nodes and the output node are large, then, for any (input) pattern, the magnitude of the net input to the output node is liable to be so large that its output value will saturate (near to 1.0 or −1.0). This is denoted as the saturation phenomenon. RNBP evidently does not have the saturation phenomenon. Furthermore, the distribution of its sensitivity values is more reasonable, compared to BP.
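The saturation argument can be seen directly from the tanh derivative: once the net input to the output node is large, the factor (1 − O^2) in equation (9) is nearly zero, so every pattern's sensitivity collapses toward the same tiny value. A quick illustration:

```python
import math

def tanh_grad(net: float) -> float:
    """Derivative of tanh at a given net input: 1 - tanh(net)^2."""
    return 1.0 - math.tanh(net) ** 2

# Moderate net inputs leave room for pattern-to-pattern variation...
print(tanh_grad(0.5))   # about 0.79
# ...but large net inputs (big hidden-to-output weights) flatten the derivative.
print(tanh_grad(5.0))   # about 0.00018
print(tanh_grad(6.0))   # about 0.000025
```

Since equation (9) multiplies everything by this factor, a saturated output node makes all the R_lj(B_c) values nearly constant across patterns, which is exactly the small-standard-deviation pattern seen for BP in Table 5.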
Figure 4  The sensitivity curves (panels: ∂C/∂S, ∂C/∂K, ∂C/∂R, ∂C/∂T, ∂C/∂σ)

Figures 5 and 6 display the frequency distributions of the desired call prices and the forecast results of BP and RNBP regarding those 400 training patterns. Those figures agree with the previous conclusion that BP performs better than RNBP in the in-the-money options, while worse in the out-of-the-money options. The most interesting observation of Figure 5 is that there is no occurrence of forecasting values in the range of call prices either less than 2.0 or greater than 17. Likewise, in Figure 6, there is no occurrence of forecasting values in the upper range of call prices. It seems that both ANNs are not well trained since their generalization abilities are defective in some ranges of call prices.
Figure 5  The frequency distribution in the in-the-money experiment

Figure 6  The frequency distribution in the out-of-the-money experiment

4. Summary and Future Work

The following lessons have been learned from this study:

(1) Table 5 displays the fact that the standard deviations of R_lj(B_c) in BP are much smaller, compared with those in RNBP. This may be due to the saturation phenomenon happening in BP. On the other hand, RNBP evidently does not have the saturation phenomenon. Furthermore, the distribution of RNBP's sensitivity values is more reasonable, compared to BP.

(2) The results of the sensitivity analysis of RNBP in this study are consistent with our
prior expectations. The sensitivity analysis can indicate the key factors which contribute the most impact to the outputs. In summary, the sensitivity analysis can be an alternative criterion for comparing the effectiveness of ANNs. Moreover, the sensitivity analysis can discover the knowledge embedded in the ANN. Thus it is an efficient tool for information filtering and mining in an unknown environment.

Although we have obtained some robust results, the following topics need to be further studied in the future:

(1) The sensitivity analysis can discover the knowledge embedded in the ANN. This is useful for artificial intelligent agents in applications, especially in this information-overloaded society. One future work is to explore further the ability of the sensitivity analysis in reading the knowledge embedded in the ANN via applying it to a real practical problem.

(2) We have observed that there is no occurrence of forecasting values in the range of call prices either less than 2.0 or greater than 17 in Figure 5; similarly, in Figure 6, there is no occurrence of forecasting values in the upper range of call prices. It seems that both ANNs are not well trained since their generalization ability is defective in those ranges of call prices. One future work is to explore the reason behind this defect.

References

1. Battiti, R. First- and second-order methods for learning: between steepest descent and Newton's method, Neural Networks, Vol. 5, 1992.
2. Black, F. and Scholes, M. The pricing of options and corporate liabilities, Journal of Political Economy, Vol. 81, 1973.
3. Chou, Y., Liu, S. and Tsaih, R. Applying reasoning neural networks to the analysis and forecast of Taiwan's stock index variation, Taipei Economic Inquiry, Taipei, Vol. 34, No. 2, 1996.
4. Hanke, M. Neural network approximation of option-pricing formulas for analytically intractable option-pricing models, Journal of Computational Intelligence in Finance, Sep./Oct., 1997.
5. Hull, J. Options, Futures, and Other Derivatives, 3rd edition, Prentice-Hall, Inc., 1997.
6. Hutchinson, J., Lo, A. and Poggio, T. A nonparametric approach to pricing and hedging derivative securities via learning networks, The Journal of Finance, Vol. XLIX, No. 3, Jul., 1994.
7. Jacobs, R.A. Increased rates of convergence through learning rate adaptation, Neural Networks, Vol. 1, 1988.
8. Lajbcygier, P., Boek, C., Flitman, A., and Palaniswami, M. Comparing conventional and artificial neural network models for the pricing of options on futures, NeuralVest Journal, Sep., 1996.
9. Naimimohasses, R., Barnett, D., Green, D., and Smith, P. Sensor optimization using neural network sensitivity measures, Measurement Science & Technology, Sep., 1995.
10. Rumelhart, D., Hinton, G., and Williams, R. Learning internal representations by error propagation, Parallel Distributed Processing, Cambridge, MA: MIT Press, Vol. 1, 1986.
11. Sarkar, D. Methods to speed up error back-propagation learning algorithm, ACM Computing Surveys, Vol. 27, No. 4, Dec., 1995.
12. Steiger, D. and Sharda, R. Analyzing mathematical models with inductive learning networks, European Journal of Operational Research, 93, 1995.
13. Takechi, H., Murakami, K. and Izumida, M. Back propagation learning algorithm with different learning coefficients for each layer, Systems and Computers in Japan, Vol. 26(7), 1995.
14. Tsaih, R. The softening learning procedure, Mathematical and Computer Modelling, Vol. 18, No. 8, 1993.
15. Tsaih, R. Learning procedure that guarantees obtaining the desired solution of the 2-classes categorization learning problem, Proceedings of The First Asia-Pacific Conference on Simulated Evolution and Learning, Taejon, Korea, 1996.
16. Tsaih, R. Reasoning neural networks, in Ellacott, S., J. Mason & I. Anderson (Eds.), Mathematics of Neural Networks: Models, Algorithms and Applications, Kluwer Academic Publishers, London, 1997.
17. Tsaih, R. An explanation of reasoning neural networks, Mathematical and Computer Modelling, Vol. 28, No. 2, 1998.
18. Tsaih, R., Chen, W. and Lin, Y. Application of reasoning neural networks to financial swaps, Journal of Computational Intelligence in Finance, Vol. 6, No. 3, 1998.
19. Tsaih, R., Hsu, Y. and Lai, C. Forecasting S&P 500 stock index futures with a hybrid AI system, Decision Support Systems, Vol. 23, No. 2, 1998.
20. Wang, S. The unpredictability of standard back propagation neural networks in classification applications, Management Science, Vol. 41, No. 3, 1995.
21. Yoon, Y., Guimaraes, T. and Swales, G. Integrating artificial neural networks with rule-based expert systems, Decision Support Systems, 11, 1994.
Ray Tsaih, also known as Rua-Huan Tsaih, is a professor at National Chengchi University, Taipei, Taiwan. He received his Ph.D. in Operations Research in 1991 from the University of California, Berkeley. His research interests are developing new neural networks and applying neural networks to finance. His most recent work has been published in Computers & Operations Research, Decision Support Systems, Mathematical and Computer Modelling, Advances in Pacific Basin Business, Economics and Finance, Journal of Computational Intelligence in Finance, and Mathematics of Neural Networks: Models, Algorithms and Applications.

Hsiou-Wei William Lin is a professor at the Department of International Business, National Taiwan University. He received his Ph.D. in Business in 1994 from Stanford University. His research interests are financial innovation, financial statement analysis, and financial forecasts. His most recent work has been published in Journal of Accounting and Economics, Journal of Financial Studies, The Chinese Accounting Review, NTU Management Review, International Journal of Accounting Studies, Review of Securities and Futures Markets, Review of Quantitative Finance and Accounting, Asia Pacific Management Review, Journal of Management, and Sun Yat-Sen Management Review.

Yi-ng Lin received an M.S. degree in 1998, majoring in Management Information Systems at National Chengchi University, Taipei, Taiwan.