#include <artmap.h>
Public Types | |
enum | RunModeType { FUZZY, DEFAULT, IC, DISTRIB } |
The available ARTMAP models (See main page for differences between models). More... | |
Public Member Functions | |
artmap (int M, int L) | |
Artmap class constructor - Allocates internal arrays, sets initial values. | |
~artmap () | |
Artmap class destructor - frees array storage. | |
void | train (float *a, int K) |
Trains the artmap network: Given input vector a, predict target class K. | |
void | test (float *a) |
Tests a trained artmap network: given an input a, sets the output predictions (retrieved via getOutput()). | |
float | getOutput (int k) |
Returns the k-th output (distributed prediction). | |
int | getMaxOutputIndex () |
Returns the index of the largest output prediction, which in a winner-take-all situation (fuzzy ARTMAP) is the predicted class. | |
void | fwrite (ofstream &ofs) |
Writes out all the persistent data describing a trained ARTMAP network. | |
void | fread (ifstream &ifs, string &specialRequest) |
Loads a trained ARTMAP network from a file. | |
void | setParam (const string &name, const string &value) |
Provides a string-based interface for setting ARTMAP parameters. | |
int | getC () |
Returns the number of category nodes (aka templates learned by the network). | |
int | getNodeClass (int j) |
Returns the output class associated with a category node with the given index. | |
int | getLtmRequired () |
Returns the number of bytes required to store the weights for the network. | |
float & | tauIj (int i, int j) |
Accessor method for retrieving the model's bottom-up weights (tau_ij values). | |
float & | tauJi (int i, int j) |
Accessor method for retrieving the model's top-down weights (tau_ji values). | |
int | getOutputType (const string &name) |
Given the name of an output request, returns a number specifying the type of the output (at the moment, all outputs are of type int). | |
int | getInt (const string &name) |
Returns the value of the specified item as an integer. | |
float | getFloat (const string &name) |
Returns the value of the specified item as a floating point number. | |
string & | getString (const string &name) |
Returns the value of the specified item as a string. | |
void | requestOutput (const string &name, ofstream *ost) |
Registers a request for an output to be directed to a file. | |
void | closeStreams () |
Closes any streams that were passed during a requestOutput() call. | |
void | setNetworkType (RunModeType v) |
Accessor method. | |
void | setM (int v) |
Accessor method. | |
void | setL (int v) |
Accessor method. | |
void | setRhoBar (float v) |
Accessor method. | |
void | setRhoBarTest (float v) |
Accessor method. | |
void | setAlpha (float v) |
Accessor method. | |
void | setBeta (float v) |
Accessor method. | |
void | setEps (float v) |
Accessor method. | |
void | setP (float v) |
Accessor method. | |
RunModeType | getNetworkType () |
Accessor method. | |
int | getM () |
Accessor method. | |
int | getL () |
Accessor method. | |
float | getRhoBar () |
Accessor method. | |
float | getRhoBarTest () |
Accessor method. | |
float | getAlpha () |
Accessor method. | |
float | getBeta () |
Accessor method. | |
float | getEps () |
Accessor method. | |
float | getP () |
Accessor method. | |
Private Member Functions | |
void | complementCode (float *a) |
Complement-codes the input vector. | |
int | F0_to_F2_signal () |
Computes the signal function from the F0 to the F2 layer, for each of the C category nodes. | |
void | newNode () |
Adds a new node to the F2 layer. | |
void | CAM_distrib () |
Implements the Increased-Gradient Content-Addressable-Memory (CAM) model. | |
void | CAM_WTA () |
Implements the Winner-Take-All (WTA) Content-Addressable-Memory (CAM) model. | |
void | F1signal_WTA () |
Propagates a WTA signal from the F3 to the F1 layer. | |
void | F1signal_distrib () |
Propagates a distributed signal from the F3 to the F1 layer. | |
bool | passesVigilance () |
Implements the vigilance criterion, testing that the match is good enough. | |
int | prediction_distrib () |
Propagates a distributed signal from F3 to F0ab. | |
int | prediction_WTA () |
Propagates a signal from the winning F3 node to F0ab. | |
void | matchTracking () |
In response to a predictive mismatch, raises vigilance so next choice is more conservative. | |
void | creditAssignment () |
Adjusts activations prior to resonance (distributed mode only), so that nodes making the wrong prediction aren't allowed to learn. | |
void | resonance_distrib () |
When a successful prediction has been made, adjusts the long-term memory weights to accommodate the newly matched sample (distributed version). | |
void | resonance_WTA () |
When a successful prediction has been made, adjusts the long-term memory weights to accommodate the newly matched sample (WTA version). | |
void | growF2 (float factor) |
Increases the pool of nodes available for the F2 layer. | |
float | cost (float x) |
This cost function takes the input signal to an F2 node, and rescales the metric so that nodes that match the training/test sample being evaluated well have low cost. | |
void | toStr () |
Logs all the ARTMAP network details. | |
void | toStr_dimensions () |
Logs the network's dimensions. | |
void | toStr_A () |
Logs the activations of the F1 node field, that is, the complement-coded input vector. | |
void | toStr_nodeJTSH (int j) |
Logs the F2 input signal, along with its tonic and phasic components (H = capital Theta, the tonic component). | |
void | toStr_nodeJdetails (int j) |
Logs the F2 node activation (pre/post normalization = y/Y), and the class to which the node maps. | |
void | toStr_nodeJtauIj (int j) |
Logs the Jth node's bottom-up thresholds (tau_ij). | |
void | toStr_nodeJtauJi (int j) |
Logs the Jth node's top-down thresholds (tau_ji). | |
void | toStr_x () |
Logs the match field activations. | |
void | toStr_sigma_i () |
Logs the F3->F1 signal sigma_i. | |
void | toStr_sigma_k () |
Logs the output prediction sigma_k. | |
Private Attributes | |
RunModeType | NetworkType |
Controls the algorithm used. | |
int | M |
Number of inputs (before complement-coding). | |
int | L |
Number of output classes ([1-L], not [0-(L-1)]). | |
float | RhoBar |
Baseline vigilance - training. | |
float | RhoBarTest |
Baseline vigilance - testing. | |
float | Alpha |
Signal rule parameter. | |
float | Beta |
Learning rate. | |
float | Eps |
Match tracking parameter. | |
float | P |
CAM rule power parameter. | |
int | C |
Number of committed nodes. | |
int | J |
In WTA mode, index of the winning node. | |
int | K |
The target class (1-L, not 0-(L-1)). | |
float | rho |
Current vigilance. | |
float * | A |
Index ranges - i: 1-M, j: 1-C, k: 1-L. Indexed by i - Complement-coded input. |
float * | x |
Indexed by i - F1, matching. | |
float * | y |
Indexed by j - F2, coding. | |
float * | Y |
Indexed by j - F3, counting. | |
float * | T |
Indexed by j - Total F0->F2. | |
float * | S |
Indexed by j - Phasic F0->F2. | |
float * | H |
Indexed by j - Tonic F0->F2 (Capital Theta). | |
float * | c |
Indexed by j - F2->F3. | |
bool * | lambda |
Indexed by j - T if node is eligible, F otherwise. | |
float * | sigma_i |
Indexed by i - F3->F1. | |
float * | sigma_k |
Indexed by k - F3->F0ab. | |
int * | kap |
Indexed by j - F3->Fab (small kappa). | |
float * | dKap |
Distributed version of kap. | |
float * | tIj |
Indexed by i&j - F0->F2 (tau sub ij). | |
float * | tJi |
Indexed by j&i - F3->F1 (tau sub ji). | |
bool | dMapWeights |
if true, use dKap, else use kap | |
float | Tu |
Uncommitted node activation. | |
float | sum_x |
To avoid recomputing norm. | |
int | _2M |
To keep from repeatedly calculating 2*M. | |
int | N |
Growable upper bound on coding nodes. | |
int | i |
int | j |
int | k |
Indices i, j and k, so we don't have to declare 'em everywhere. | |
ofstream * | ostCategoryActivations |
Output stream for category node activations, registered via requestOutput(). |

Depending on the NetworkType setting, the class can emulate fuzzy ARTMAP, Default ARTMAP, or the instance counting and distributed varieties. A flowchart of the training process is shown below:
ARTMAP Training Flowchart
|
The available ARTMAP models (See main page for differences between models).
|
|
Artmap class constructor - Allocates internal arrays, sets initial values.
{
    LOG_ENTER (4, "artmap::artmap(M = " << M << ", L = " << L << ")\n");

    _2M = 2*M;

    setM (M);
    setL (L);

    N           = default_initNumNodes;
    NetworkType = default_NetworkType;
    RhoBar      = default_RhoBar;
    RhoBarTest  = default_RhoBarTest;
    Alpha       = default_Alpha;
    Beta        = default_Beta;
    Eps         = default_Eps;
    P           = default_P;

    A       = new float[_2M];
    x       = new float[_2M];
    sigma_i = new float[_2M];
    sigma_k = new float[L];

    y      = new float[N];
    Y      = new float[N];
    T      = new float[N];
    S      = new float[N];
    H      = new float[N];
    c      = new float[N];
    lambda = new bool[N];
    dKap   = 0;              // Allocated if needed in fread()
    kap    = new int[N];

    tIj = new float[_2M*N];
    tJi = new float[N*_2M];

    forall_j {
        c[j] = 0.0;
        foreach_i { tauIj(i, j) = tauJi(i, j) = 0.0; }
    }

    C = 0;

    Tu = float(M);

    ostCategoryActivations = 0;
    dMapWeights = false;

    LOG_LEAVE (4);
}
|
|
Artmap class destructor - frees array storage.
{
    LOG_ENTER (4, "artmap::~artmap()\n");

    delete[] A;
    delete[] x;
    delete[] sigma_i;
    delete[] sigma_k;
    delete[] y;
    delete[] Y;
    delete[] T;
    delete[] S;
    delete[] H;
    delete[] c;
    delete[] lambda;
    delete[] kap;
    delete[] dKap;   // 0 unless allocated in fread(); deleting a null pointer is safe
    delete[] tIj;
    delete[] tJi;

    LOG_LEAVE(4);
}
|
|
Implements the Increased-Gradient Content-Addressable-Memory (CAM) model. In contrast to the WTA CAM rule, this CAM rule yields a distributed set of activations across the F2 layer. As in the WTA CAM, only nodes in the eligibility set (lambda) take part in the competition. The computation is based on the cost() function as follows:
{
    LOG_ENTER (4, "CAM_distrib()\n");

    float *costArray  = new float[C];
    bool  *ptBoxArray = new bool [C];
    int numPtBoxes = 0;

    foreach_j {
        if (lambda[j] && ((costArray[j] = cost(T[j])) < tinyNum)) {
            if (LOG_LEVEL(4)) { INDENT; LOG ("Input is contained in point box at node " << j << "\n"); }
            ptBoxArray[j] = true;
            numPtBoxes++;
        } else {
            ptBoxArray[j] = false;
        }
    }

    if (numPtBoxes == 0) {
        if (LOG_LEVEL(4)) { INDENT; LOG ("No point boxes contain the input\n"); }

        float sumOfInvCosts = 0.0;  // First compute denominator term
        foreach_j { if (lambda[j]) sumOfInvCosts += 1.0f / (pow (cost (T[j]), P)); }
        foreach_j { y[j] = ((lambda[j]) ? (1.0f / (pow (cost(T[j]), P) * sumOfInvCosts)) : 0.0f); }
    } else {
        if (LOG_LEVEL(4)) { INDENT; LOG ("Input is contained in " << numPtBoxes << " point boxes\n"); }
        foreach_j { y[j] = ((ptBoxArray[j]) ? (1.0f / numPtBoxes) : 0.0f); }
    }

    // Calculate F3 activation:
    float denom = 0.0;
    if (NetworkType == DEFAULT) {
        foreach_j { denom += y[j]; }
        foreach_j { Y[j] = y[j] / denom; }
    } else if ((NetworkType == IC) || (NetworkType == DISTRIB)) {
        foreach_j { denom += c[j] * y[j]; }
        foreach_j { Y[j] = c[j] * y[j] / denom; }
    } else
        MSG_EXCEPTION ("Called increasedGradientCAM() with NetworkType = " << NetworkType);

    if (LOG_LEVEL(5)) { foreach_j { INDENT; LOG ("Node " << j << " - "); toStr_nodeJdetails(j); } }

    delete[] costArray;
    delete[] ptBoxArray;

    LOG_LEAVE(4)
}
|
|
Implements the Winner-Take-All (WTA) Content-Addressable-Memory (CAM) model.
The node with the largest T value that is also in the eligibility set (lambda) is declared the winner. Its activation value is set to 1, and that of all the other nodes is set to 0.

{
    LOG_ENTER (4, "CAM_WTA() - ");

    isFromTie = false;
    vector<int> tied;
    float maxTj = -1.0;
    foreach_j {  // Looking only at eligible nodes, find all maximal T[j]s
        if (lambda[j]) {
            if (T[j] > maxTj) {
                tied.clear(); tied.push_back(j);
                maxTj = T[j];
            } else if (fabs (T[j]-maxTj) < tinyNum) {  // floating pt '=='
                tied.push_back(j);
            }
        }

        y[j] = Y[j] = 0;
    }

    // If tie for winner, choose at random from list of ties
    int idx = (tied.size() > 1) ? (rand() % (int)tied.size()) : 0;
    J = tied.at(idx);

    if (LOG_LEVEL(4)) {
        if (tied.size() > 1) {
            isFromTie = true;
            LOG((int)tied.size() << " nodes tied: ");
            for (int i = 0; i < tied.size(); i++) {
                LOG(tied.at(i) << " (" << T[tied.at(i)] << "), ");
            }
            LOG (" -> chose " << J << " at random");
            LOG (endl);
        }
    }

    y[J] = Y[J] = 1;

    lambda[J] = false;  // Node J no longer eligible

    if (LOG_LEVEL(4)) { LOG ("Node " << J << " wins\n"); }

    LOG_LEAVE(4);
}
|
|
Closes any streams that were passed during a requestOutput() call.
{
    if (ostCategoryActivations != 0) {
        ostCategoryActivations->close();
        delete ostCategoryActivations;
        ostCategoryActivations = 0;
    }
}
|
|
Complement-codes the input vector. In other words, doubles the size of the input vector by setting A[i] = a[i] and A[i+M] = 1 - a[i].
{
    LOG_ENTER (5, "complementCode(): ")

    for (i = 0; i < M; i++) {
        A[i]   = a[i];
        A[i+M] = 1.0f - a[i];
    }

    if (LOG_LEVEL(5)) { INDENT; toStr_A(); }

    LOG_LEAVE(5)
}
|
|
This cost function takes the input signal to an F2 node, and rescales the metric so that nodes that match the training/test sample being evaluated well have low cost. It reaches a minimum of zero when the input signal is maximal, which corresponds to the training/test sample falling within a point category box.
|
|
Adjusts activations prior to resonance (distributed mode only), so that nodes making the wrong prediction aren't allowed to learn. The following adjustments are made to activations:
{
    LOG_ENTER (4, "creditAssignment()\n");

    if (dMapWeights) MSG_EXCEPTION ("ARTMAP::creditAssignment() - distributed map weights not yet supported during training");

    float sum_y  = 0.0;
    float sum_cy = 0.0;

    // F2 blackout
    foreach_j {
        if (kap[j] != K) { y[j] = 0.0; }
        else             { sum_y += y[j]; }
    }

    foreach_j {
        y[j] /= sum_y;              // F2 activation
        sum_cy += c[j] * y[j];
    }

    foreach_j {
        Y[j] = c[j] * y[j] / sum_cy;  // F3 activation

        if (LOG_LEVEL(6)) { INDENT; toStr_nodeJdetails(j); }
    }

    // F3->F1 signal
    foreach_i {
        sigma_i[i] = 0.0;
        foreach_j { sigma_i[i] += pos(Y[j] - tauJi(i, j)); }
    }

    if (LOG_LEVEL(6)) { INDENT; toStr_sigma_i(); }
    LOG_LEAVE(4)
}
|
|
Computes the signal function from the F0 to the F2 layer, for each of the C category nodes.
Each node with signal value greater than the uncommitted node activation is added to the set lambda. The meaning of the set lambda is extended in the implementation to also encompass that of the reset set used in the Distributed ARTMAP paper. Specifically, it represents those nodes that are both more active than the uncommitted node baseline activation, and those that have not yet been reset. In other words, it represents those nodes that are still eligible to encode the input. This change in meaning simplifies the algorithm and enhances the efficiency of the implementation, which only has to check a single set for node eligibility, rather than two.
{
    LOG_ENTER (4, "F0_to_F2_signal() - evaluating " << C << " nodes\n");

    int eligibleNodes = 0;

    foreach_j {
        S[j] = H[j] = 0.0;

        foreach_i {
            S[j] += min (A[i], (1 - tauIj(i, j)));
            H[j] += tauIj(i, j);
        }

        T[j] = (S[j] + (1 - Alpha) * H[j]);

        if (T[j] >= Tu) { eligibleNodes++; lambda[j] = true; } else lambda[j] = false;
    }

    if (LOG_LEVEL(5)) {
        if (eligibleNodes == 0) { INDENT; LOG("No nodes are similar enough!"); }
        foreach_j {
            if (lambda[j]) {
                if (LOG_LEVEL(6)) { INDENT; LOG("Si" << j << ": "); foreach_i { LOGF(min (A[i],(1-tauIj(i,j)))); } LOG ("\n"); }
                if (LOG_LEVEL(6)) { INDENT; LOG("Hi" << j << ": "); foreach_i { LOGF(tauIj(i,j)); } LOG ("\n"); }
                INDENT; LOG ("Node " << j << " - "); toStr_nodeJTSH(j);
            }
        }
    }

    LOG_LEAVE(4);

    return eligibleNodes;
}
|
|
Propagates a distributed signal from the F3 to the F1 layer. This code implements the equation sigma_i = sum_j [Y_j - tau_ji]^+, sending from each F3 node j to each F1 node i whatever part of the activation level Y_j exceeds the threshold tau_ji.

{
    LOG_ENTER (4, "F1signal_distrib()\n");

    // Calculate F3->F1 signal:
    foreach_i {
        sigma_i[i] = 0.0;
        foreach_j {
            sigma_i[i] += pos (Y[j] - tauJi(i, j));
        }
    }

    if (LOG_LEVEL(5)) { INDENT; toStr_sigma_i(); }
    LOG_LEAVE(4)
}
|
|
Propagates a WTA signal from the F3 to the F1 layer. This code implements a special case of the distributed version F1signal_distrib(), used in cases where activation at F3 is winner-take-all, i.e., only a single node with index J is active. In this case there's no need to iterate over j, and it's much more efficient to just send the signal sigma_i = 1 - tau_Ji. Note that using default ARTMAP notation, this is equivalent to sending the top-down weight w_Ji.

{
    LOG_ENTER (4, "F1signal_WTA()\n");

    // F3->F1 signal
    foreach_i {
        sigma_i[i] = (1 - tauJi (i, J));
    }

    if (LOG_LEVEL(5)) { INDENT; toStr_sigma_i(); }
    LOG_LEAVE(4)
}
|
|
Loads a trained ARTMAP network from a file.
See fwrite() for the file format.
{
    if (specialRequest == "") {  // Standard case - just load weights, map field weight and instance count
        ifs >> C;

        while (C > N) { growF2 (default_F2growthRate); }
        foreach_j {
            foreach_i { ifs >> tauIj(i,j); }
            foreach_i { ifs >> tauJi(i,j); }
            ifs >> kap[j] >> c[j];
        }
    } else if (strnicmp (specialRequest, string ("dMapWeights"))) {
        // Special case - load distributed map field weights - C * L weights
        dKap = new float[N*L];
        foreach_j { foreach_k { ifs >> dKap[j*L+k]; } }
        dMapWeights = true;

        if (LOG_LEVEL(2)) LOG ("Distributed map weights loaded...\n");
    } else MSG_EXCEPTION ("ARTMAP::fread() - unknown request: " << specialRequest);
}
|
|
Writes out all the persistent data describing a trained ARTMAP network. The output format is as follows: the first line holds C, the number of committed nodes; then, for each node j, one line holds its 2M tauIj values, its 2M tauJi values, its map field class kap[j], and its instance count c[j].
{
    if (dMapWeights) {
        MSG_EXCEPTION ("ARTMAP::fwrite() - saving distributed map weights not implemented");
    } else {
        ofs << C << endl;
        foreach_j {
            foreach_i { ofs << tauIj(i, j) << " "; }
            foreach_i { ofs << tauJi(i, j) << " "; }
            ofs << kap[j] << " " << c[j] << endl;
        }
    }
}
|
|
Accessor method.
|
|
Accessor method.
|
|
Returns the number of category nodes (aka templates learned by the network).
{ return C; }
|
|
Accessor method.
|
|
|
Returns the value of the specified item as an integer. There are currently two legal requests: "f2Nodes" (the number of committed F2 nodes) and "memory" (an estimate of the weight storage, in bytes).
{
    LOG_ENTER (4, "artmap::getInt(" << name << ")\n");

    if (strnicmp (name, string("f2Nodes"))) {
        return C;
    } else if (strnicmp (name, string("memory"))) {
        return C * M * 4 * sizeof (float);
    } else MSG_EXCEPTION ("artmap::getInt() - Unknown request (" << name << ")");

    LOG_LEAVE(4)
}
|
|
Accessor method.
|
|
Returns the number of bytes required to store the weights for the network.
|
|
Accessor method.
|
|
Returns the index of the largest output prediction, which in a winner-take-all situation (fuzzy ARTMAP) is the predicted class.
{
    std::valarray<float> outs = std::valarray<float> (sigma_k, L);
    return getIndexOfMaxElt (outs);
}
|
|
Accessor method.
|
|
Returns the output class associated with a category node with the given index.
{ if ((j < 0) || (j >= C) || dMapWeights) { return -1; } else { return kap[j]; } }
|
|
Returns the k-th output (distributed prediction).
|
|
Given the name of an output request, returns a number specifying the type of the output (at the moment, all outputs are of type int).
{
    LOG_ENTER (4, "artmap::getOutputType(" << name << ")\n");
    if (strnicmp (name, string("f2Nodes"))) {
        return 0;
    } else if (strnicmp (name, string("memory"))) {
        return 0;
    } else MSG_EXCEPTION ("artmap::getOutputType() - Unknown request (" << name << ")");

    LOG_LEAVE(4)
}
|
|
Accessor method.
|
|
Accessor method.
|
|
Accessor method.
|
|
Returns the value of the specified item as a string. There are currently no legal requests.
{
    LOG_ENTER (4, "artmap::getString(" << name << ")\n");

    if (0) {

    } else MSG_EXCEPTION ("artmap::getString() - Unknown request (" << name << ")");

    LOG_LEAVE(4)
}
|
|
Increase the pool of nodes available for the F2 layer. This method has been temporarily disabled (there seemed to be a bug associated with growing the layer). Until the bug is resolved, a fixed pool of 20,000 nodes is available for growing the layer.

{
    LOG_ENTER (4, "growF2()\n");

    //******************

    cout << "Ran out of F2 nodes - increase default_initNumNodes" << endl;

    exit (0);
#if 0
    if (factor <= 1.0) { cout << "Growth factor (" << factor << ") must be > 1\n"; exit (0); }

    int newN = int (N * factor);

    float *fTmp; int *iTmp; bool *bTmp;

    fTmp = new float[newN]; for (int i = 0; i < N; i++) { fTmp[i] = y[i]; }      delete[] y;      y = fTmp;
    fTmp = new float[newN]; for (int i = 0; i < N; i++) { fTmp[i] = Y[i]; }      delete[] Y;      Y = fTmp;
    fTmp = new float[newN]; for (int i = 0; i < N; i++) { fTmp[i] = T[i]; }      delete[] T;      T = fTmp;
    fTmp = new float[newN]; for (int i = 0; i < N; i++) { fTmp[i] = S[i]; }      delete[] S;      S = fTmp;
    fTmp = new float[newN]; for (int i = 0; i < N; i++) { fTmp[i] = H[i]; }      delete[] H;      H = fTmp;
    fTmp = new float[newN]; for (int i = 0; i < N; i++) { fTmp[i] = c[i]; }      delete[] c;      c = fTmp;

    bTmp = new bool[newN];  for (int i = 0; i < N; i++) { bTmp[i] = lambda[i]; } delete[] lambda; lambda = bTmp;
    iTmp = new int [newN];  for (int i = 0; i < N; i++) { iTmp[i] = kap[i]; }    delete[] kap;    kap = iTmp;

    fTmp = new float[newN*_2M]; for (int i = 0; i < N*_2M; i++) { fTmp[i] = tIj[i]; } delete[] tIj; tIj = fTmp;
    fTmp = new float[newN*_2M]; for (int i = 0; i < N*_2M; i++) { fTmp[i] = tJi[i]; } delete[] tJi; tJi = fTmp;

    N = newN;
#endif
    LOG_LEAVE(4)
}
|
|
In response to a predictive mismatch, raises vigilance so the next choice is more conservative. Vigilance (rho) is raised to |x|/M + Eps.

{
    LOG_ENTER (4, "matchTracking() ");

    rho = (sum_x / M) + Eps;

    if (LOG_LEVEL(4)) { LOG ("Raised vigilance to " << rho << "\n"); }

    LOG_LEAVE(4);
}
|
|
Adds a new node to the F2 layer. The new node is set as the WTA winner of the competition at F2, and its thresholds, map weights, instance count and activation values are initialized.

{
    if (LOG_LEVEL(4)) { INDENT; LOG ("Committing new node: " << C+1 << " mapping to category " << K << "\n"); }

    if (C == (N-1)) growF2(default_F2growthRate);

    J = C++;

    if (dMapWeights) {
        foreach_k { dKap[J*L + k] = 0.0f; }
        dKap[J*L + K] = 1.0f;
    } else {
        kap[J] = K;
    }

    foreach_k { sigma_k[k] = 0; }
    sigma_k[K] = 1;

    foreach_j { y[j] = Y[j] = 0; }
    y[J] = Y[J] = 1;

    // Set initial values for the node's weights
    foreach_i { tauIj(i,J) = tauJi(i,J) = 1.0f - A[i]; }

    c[J] = 1;  // Initialize instance count
}
|
|
Implements the vigilance criterion, testing that the match is good enough.
In other words, checks that the selected category node encodes a template that is close enough to the input vector, as compared to the vigilance parameter rho. More specifically, the function returns false if |x|/M < rho holds.
The inaccuracy of floating point computation requires a slight fudge in the comparison to rho. This was in response to a bug in which a newly committed node did not satisfy the vigilance criterion when rho was set to its maximum (1.0), as |x|/M came out infinitesimally smaller than rho. The comparison therefore tests against rho - tinyNum.

{
    LOG_ENTER (4, "passesVigilance()\n");

    bool result;
    sum_x = 0.0;

    foreach_i { sum_x += min (A[i], sigma_i[i]); }

    if (LOG_LEVEL(6)) { INDENT; LOG ("sum_x = " << sum_x << "\n"); }

    // Decrease rho by tiny amount - compensate for float inaccuracy
    if ((sum_x / M) < (rho - tinyNum)) {
        if (LOG_LEVEL(4)) { INDENT; LOG(std::setprecision(8) << "Failed vigilance, " << sum_x << " / " << M << " < " << rho << "\n"); }
        result = false;
    }
    else {
        if (LOG_LEVEL(4)) {
            INDENT;
            if (rho < tinyNum) {
                LOG ("Passed vigilance w/ rho = 0\n");
            } else {
                LOG ("Passed vigilance, " << sum_x << " / " << M << " >= " << rho << "\n");
            }
        }
        result = true;
    }

    LOG_LEAVE(4)

    return result;
}
|
|
Propagates a distributed signal from F3 to F0ab. More specifically, the function implements sigma_k = sum of Y_j over all nodes j for which kap_j = k. In addition, it sets K', the index of the predicted output class, to the index of the largest sigma_k value.
{
    LOG_ENTER (4, "prediction_distrib()\n");

    foreach_k { sigma_k[k] = 0.0; }
    if (dMapWeights) {
        foreach_j { foreach_k { sigma_k[k] += dKap[j*L + k] * Y[j]; } }
    } else {
        foreach_j { sigma_k[(int) kap[j]] += Y[j]; }
    }

    if (LOG_LEVEL(5)) { INDENT; toStr_sigma_k(); }

    float sigma_kp = -1.0;
    int Kprime = -1;

    foreach_k { if (sigma_k[k] > sigma_kp) { Kprime = k; sigma_kp = sigma_k[k]; } }

    if (LOG_LEVEL(4)) { INDENT; LOG ("Predicting class " << Kprime << "\n"); }
    LOG_LEAVE(4)

    return Kprime;
}
|
|
Propagates a signal from the winning node to .
{
    LOG_ENTER (4, "prediction_WTA()\n");

    foreach_k { sigma_k[k] = 0.0; }

    int Kprime = -1;

    if (dMapWeights) {
        foreach_k { sigma_k[k] += dKap[J*L + k]; }

        float sigma_kp = -1.0;

        foreach_k {  // find biggest
            if (sigma_k[k] > sigma_kp) {
                Kprime = k; sigma_kp = sigma_k[k];
            }
        }
    } else {
        Kprime = kap[J];

        sigma_k[Kprime] = 1.0;
    }

    if (LOG_LEVEL(4)) { INDENT; LOG ("Predicting class " << Kprime << "\n"); }

    LOG_LEAVE(4)

    return Kprime;
}
|
|
Registers a request for an output to be directed to a file. Currently the only legal request is "yjs", but this method is provided to allow for additional output requests to be added in the future. The request "yjs" results in the Y_j values (category node activations) being logged for each test sample.
{
    LOG_ENTER (4, "artmap::requestOutput(" << name << ")\n");

    if (strnicmp (name, string ("yjs"))) {
        ostCategoryActivations = ost;
    } else MSG_EXCEPTION ("artmap::requestOutput() - unknown request (" << name << ")");

    LOG_LEAVE(4)
}
|
|
When a successful prediction has been made, adjust the long-term memory weights to accommodate the newly matched sample (distributed version). For the distributed version of resonance, the thresholds attached to all the active nodes have to be adjusted, according to tau_ij += Beta * [y_j - tau_ij - A_i]^+ (distributed instar) and tau_ji += Beta * ([sigma_i - A_i]^+ / sigma_i) * [Y_j - tau_ji]^+ (distributed outstar),
and the node's instance counts are increased: . 00788 { 00789 LOG_ENTER (4, "resonance_distrib()\n") 00790 00791 foreach_j { 00792 foreach_i { 00793 // Increase F0->F2 threshold (distributed instar) 00794 tauIj(i, j) += Beta * pos(y[j] - tauIj(i, j) - A[i]); 00795 00796 // Increase F3->F1 threshold (distributed outstar) 00797 if (sigma_i[i] != 0.0) { 00798 tauJi(i, j) += Beta * (pos(sigma_i[i] - A[i]) / sigma_i[i]) * pos(Y[j] - tauJi(i, j)); 00799 } 00800 } 00801 00802 if (LOG_LEVEL(5)) { INDENT; toStr_nodeJtauIj(j); INDENT; toStr_nodeJtauJi(j); } 00803 00804 c[j] += y[j]; // Increase F2->F3 instance counting weights 00805 } 00806 00807 LOG_LEAVE(4) 00808 }
|
|
When a successful prediction has been made, adjust the long-term memory weights to accommodate the newly matched sample (WTA version). For the WTA version of resonance, just the thresholds attached to the winning node J have to be adjusted, according to tau_iJ += Beta * [1 - tau_iJ - A_i]^+ (with tau_Ji kept equal to tau_iJ), and the node's instance count is incremented: c_J += y_J.

{
    LOG_ENTER (4, "resonance_wta()\n");

    // Use winner-take-all version of learning law
    foreach_i { tauIj(i,J) = tauJi(i,J) = tauIj(i,J) + Beta * pos (1.0f - tauIj(i,J) - A[i]); }

    c[J] += y[J];  // Increase F2->F3 instance counting weights

    if (LOG_LEVEL(5)) { INDENT; LOG ("IC = " << c[J] << ", "); toStr_nodeJtauIj(J); }
    LOG_LEAVE(4)
}
|
|
Accessor method.
|
|
Accessor method.
|
|
Accessor method.
|
|
Accessor method.
|
|
Accessor method.
|
|
Accessor method.
|
|
Accessor method.
|
|
Provides a string-based interface for setting ARTMAP parameters. Legal parameter names are: Model (fuzzy | default | ic | distrib), RhoBar, RhoBarTest, Alpha, Beta, Eps and P.
{
    istringstream valOss (value);
    float val;

    LOG_ENTER (4, "artmap::setParam()\n");

    if (LOG_LEVEL(4)) LOG ("Setting " << name.c_str() << " to " << value.c_str() << "\n");

    if (strnicmp (name, string("Model"))) {
        if      (strnicmp (value, string("fuzzy")))   setNetworkType (FUZZY);
        else if (strnicmp (value, string("default"))) setNetworkType (DEFAULT);
        else if (strnicmp (value, string("ic")))      setNetworkType (IC);
        else if (strnicmp (value, string("distrib"))) setNetworkType (DISTRIB);
        else MSG_EXCEPTION ("ArtmapParams::setValue() - Unknown Artmap model requested (" << value << ")");

    } else if (strnicmp (name, string("RhoBar"))) {
        if (!(valOss >> val)) { MSG_EXCEPTION ("ArtmapParams::setValue() - Couldn't read value of RhoBar"); }
        if ((val < 0) || (val > 1.0)) MSG_EXCEPTION ("ArtmapParams::setValue() - RhoBar out of range [0, 1]");
        RhoBar = val;

    } else if (strnicmp (name, string("RhoBarTest"))) {
        if (!(valOss >> val)) { MSG_EXCEPTION ("ArtmapParams::setValue() - Couldn't read value of RhoBarTest"); }
        if ((val < 0) || (val > 1.0)) MSG_EXCEPTION ("ArtmapParams::setValue() - RhoBarTest out of range [0, 1]");
        RhoBarTest = val;

    } else if (strnicmp (name, string("Alpha"))) {
        if (!(valOss >> val)) { MSG_EXCEPTION ("ArtmapParams::setValue() - Couldn't read value of Alpha"); }
        Alpha = val;

    } else if (strnicmp (name, string("Beta"))) {
        if (!(valOss >> val)) { MSG_EXCEPTION ("ArtmapParams::setValue() - Couldn't read value of Beta"); }
        Beta = val;

    } else if (strnicmp (name, string("Eps"))) {
        if (!(valOss >> val)) { MSG_EXCEPTION ("ArtmapParams::setValue() - Couldn't read value of Eps"); }
        Eps = val;

    } else if (strnicmp (name, string("P"))) {
        if (!(valOss >> val)) { MSG_EXCEPTION ("ArtmapParams::setValue() - Couldn't read value of P"); }
        P = val;

    } else MSG_EXCEPTION ("artmap::setParam() - Unknown param (" << name << ")");

    LOG_LEAVE(4)
}
|
|
Accessor method.
|
|
Accessor method.
|
|
Accessor method for retrieving the model's bottom-up weights ( values).
|
|
Accessor method for retrieving the model's top-down ( values).
|
|
Tests a trained artmap network: given an input a, sets the output predictions sigma_k (retrieved via getOutput()).
ARTMAP Testing Flowchart

{
    LOG_ENTER (4, "\ntest(");
    if (LOG_LEVEL(4)) { INDENT; LOG ("["); for (i = 0; i < M; i++) LOGF(a[i]); LOG ("])\n"); }

    isFromTie = false;
    bool dontKnow = false;
    bool d = (NetworkType == FUZZY) ? false : true;

    rho = RhoBarTest;

    complementCode(a);

    int eligibleNodes = F0_to_F2_signal();

    if (eligibleNodes > 0) {
        if (d) {
            CAM_distrib(); F1signal_distrib();
        } else {
            CAM_WTA(); F1signal_WTA();
        }
        if (passesVigilance()) {
            if (d) { prediction_distrib(); } else { prediction_WTA(); }
        } else {
            dontKnow = true;
        }
    } else {
        dontKnow = true;
    }

    if (LOG_LEVEL(4)) { if (isFromTie) LOG("Node " << J << " (from tie) wins" << endl); }

    if (dontKnow) {
        foreach_k { sigma_k[k] = 1; }  // "Don't know" sets all outputs to 1
    }

    if (ostCategoryActivations) { foreach_j { (*ostCategoryActivations) << Y[j] << " "; } (*ostCategoryActivations) << endl; }
    LOG_LEAVE(4)
}
|
|
Logs all the ARTMAP network details.
```cpp
{
    LOG ("*******************************************************\n");
    toStr_dimensions();
    LOG ("J: " << J << ", K: " << K << "\n\n");

    toStr_A();
    LOG ("\n--------------------------------------------\n");
    foreach_j {
        LOG ("Node " << j << "\n");
        toStr_nodeJdetails(j);
        toStr_nodeJtauIj(j);
        toStr_nodeJtauJi(j);
        LOG ("--------------------------------------------\n");
    }

    toStr_x ();
    toStr_sigma_i ();
    toStr_sigma_k ();

    LOG ("=======================================================\n");
}
```
|
|
Logs the activations of the F1 node field, that is, the complement-coded input vector.
```cpp
{
    LOG ("A: (");
    for (i = 0; i < M; i++) LOGF(A[i]);
    LOG (") ( ");
    for (i = M; i < _2M; i++) { LOGF(A[i]); }
    LOG (")\n");
}
```
|
|
Logs the network's dimensions.
|
|
Logs the F2 node activation (pre/post normalization = y/Y), and the class to which the node maps.
```cpp
{
    LOG ("y: " << y[j] << " (c: " << c[j] << ") -> Y: " << Y[j] << " (-> class " << kap[j] << ")\n");
}
```
|
|
Logs the Jth node's bottom-up thresholds (the tauIj values).
```cpp
{
    LOG ("tauIj[" << j << "]: ( ");
    for (i = 0; i < M; i++) LOG(tauIj(i, j) << " ");
    LOG (") ( ");
    for (i = M; i < _2M; i++) LOG(tauIj(i, j) << " ");
    LOG (")\n");
}
```
|
|
Logs the Jth node's top-down thresholds (the tauJi values).
```cpp
{
    LOG ("tauJi[" << j << "]: ( ");
    for (i = 0; i < M; i++) LOG(tauJi(i, j) << " ");
    LOG (") ( ");
    for (i = M; i < _2M; i++) LOG(tauJi(i, j) << " ");
    LOG (")\n");
}
```
|
|
Logs the F2 input signal T, along with its tonic and phasic components (S and H).
```cpp
{
    LOG ("(S: " << std::fixed << std::setprecision (2) << S[j]
         << ", H: " << std::fixed << std::setprecision (2) << H[j]
         << ") -> T: " << std::fixed << std::setprecision (2) << T[j] << "\n");
}
```
|
|
Logs the F3->F1 signal sigma_i.
```cpp
{
    LOG ("sigma_i: (");
    for (i = 0; i < M; i++) LOGF(sigma_i[i]);
    LOG (") ( ");
    for (i = M; i < _2M; i++) LOGF(sigma_i[i]);
    LOG (")\n");
}
```
|
|
Logs the output prediction sigma_k.
```cpp
{
    LOG ("sigma_k: ");
    for (k = 0; k < L; k++) LOG (sigma_k[k] << " ");
    LOG ("\n");
}
```
|
|
Logs the match field activations.
```cpp
{
    LOG ("x: (");
    for (i = 0; i < M; i++) LOGF(x[i]);
    LOG (") ( ");
    for (i = M; i < _2M; i++) LOGF(x[i]);
    LOG (")\n");
}
```
|
|
Trains the artmap network: Given input vector a, predict target class K.
ARTMAP Training Flowchart

```cpp
void artmap::train (float *a, int K)
{
    LOG_ENTER (4, "train(");
    if (LOG_LEVEL(4)) { LOG ("["); for (i = 0; i < M; i++) LOGF(a[i]); LOG ("] --> " << K << ")\n"); }

    bool d = (NetworkType == DISTRIB) ? true : false;
    bool needNewNode = false;
    isFromTie = false;

    this->K = K;                // Save target class

    complementCode (a);

    if (C == 0) {               // New network - commit a new node
        needNewNode = true;
    } else {
        rho = RhoBar;           // Reset network vigilance to baseline

        int eligibleNodes = F0_to_F2_signal();

        for (;;) {              // Outer loop: Match tracking
            bool passedVigilance = false;

            while (eligibleNodes > 0) {     // Inner loop: Vigilance
                if (d) {
                    CAM_distrib(); F1signal_distrib();
                } else {
                    CAM_WTA(); F1signal_WTA(); eligibleNodes--;
                }

                if (passesVigilance()) { passedVigilance = true; break; }

                d = false;
                if (LOG_LEVEL(4)) { INDENT; LOG ("Match not good enough! Reverting to WTA\n"); }
            } // Failed vigilance, try again

            if (passedVigilance == false) { // Fell through without finding candidate
                if (LOG_LEVEL(4)) { INDENT; LOG ("Out of eligible nodes, allocating new one\n"); }
                needNewNode = true; break;
            }

            // Passed vigilance criterion
            int Kprime = (d ? prediction_distrib() : prediction_WTA());

            if (Kprime == K) break;

            matchTracking();

            d = false;
            if (LOG_LEVEL(4)) { INDENT; LOG ("Predicted wrong class! Reverting to WTA\n"); }
        } // predicted wrong class, try again
    }

    if (LOG_LEVEL(4)) { if (isFromTie) LOG ("Node " << J << " (from tie) wins" << endl); }

    if (needNewNode) {
        newNode();
    } else {
        if (NetworkType == DISTRIB) { creditAssignment(); resonance_distrib(); } else { resonance_WTA(); }
    }

    LOG_LEAVE(4)
}
```
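The inner vigilance loop above repeatedly picks a candidate category and checks whether it matches the input well enough. For the winner-take-all variants, the textbook fuzzy ART choice function and vigilance test look like the following sketch. Note the hedges: this uses the classic weight-vector form (w_j, alpha, rho) rather than this class's tauIj/tauJi threshold representation, so it illustrates the computation, not the class's actual code:

```cpp
#include <algorithm>
#include <vector>

// Fuzzy ART choice function: T_j = |A ^ w_j| / (alpha + |w_j|),
// where ^ is the component-wise minimum and |.| the L1 norm.
float choice (const std::vector<float> &A, const std::vector<float> &w, float alpha)
{
    float matchNorm = 0.0f, wNorm = 0.0f;
    for (std::size_t i = 0; i < A.size (); ++i) {
        matchNorm += std::min (A[i], w[i]);
        wNorm     += w[i];
    }
    return matchNorm / (alpha + wNorm);
}

// Vigilance test: accept node j iff |A ^ w_j| / M >= rho, where M is the
// dimension of the original input (|A| = M under complement coding).
bool passesVigilanceSketch (const std::vector<float> &A, const std::vector<float> &w, float rho)
{
    float matchNorm = 0.0f;
    for (std::size_t i = 0; i < A.size (); ++i) matchNorm += std::min (A[i], w[i]);
    const float M = A.size () / 2.0f;
    return matchNorm / M >= rho;
}
```

Raising rho (as matchTracking() does after a wrong prediction) shrinks the set of nodes that can pass this test, which is what forces the search onto other categories or, ultimately, onto a newly committed node.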
|
|
To keep from repeatedly calculating 2*M.
|
|
Index ranges: i = 1-M, j = 1-C, k = 1-L. Indexed by i - Complement-coded input.
|
|
Signal rule parameter.
|
|
Learning rate.
|
|
Indexed by j - F2->F3.
|
|
Number of committed nodes.
|
|
Distributed version of kap.
|
|
If true, use dKap; else use kap.
|
|
Match tracking parameter.
|
|
Indexed by j - Tonic F0->F2 (Capital Theta).
|
|
|
|
|
|
In WTA mode, index of the winning node.
|
|
Indices i, j and k, so we don't have to declare 'em everywhere.
|
|
The target class (1-L, not 0-(L-1)).
|
|
Indexed by j - F3->Fab (small kappa).
|
|
Number of output classes ([1-L], not [0-(L-1)]).
|
|
Indexed by j - T if node is eligible, F otherwise.
|
|
Number of inputs (before complement-coding).
|
|
Growable upper bound on coding nodes.
|
|
Controls the algorithm used.
|
|
|
|
CAM rule power parameter.
|
|
Current vigilance.
|
|
Baseline vigilance - training.
|
|
Baseline vigilance - testing.
|
|
Indexed by j - Phasic F0->F2.
|
|
Indexed by i - F3->F1.
|
|
Indexed by k - F3->F0ab.
|
|
To avoid recomputing norm.
|
|
Indexed by j - Total F0->F2.
|
|
Indexed by i&j - F0->F2 (tau sub ij).
|
|
Indexed by j&i - F3->F1 (tau sub ji).
|
|
Uncommitted node activation.
|
|
Indexed by i - F1, matching.
|
|
Indexed by j - F3, counting.
|
|
Indexed by j - F2, coding.
|