Author | Description | Date submitted | Error rate (%) |
0108 | Adaboost with decision stump as weak learner | Wed Jan 8 18:35:33 2014 | 2.8 |
0108 | Adaboost with decision stump as weak learner | Sun Jan 12 13:35:33 2014 | 0.6 |
0108 | Bagging with decision stump as weak learner | Sun Jan 12 14:19:35 2014 | 3.8 |
0108 | Adaboost with decision stump as weak learner | Sun Jan 12 14:22:07 2014 | 0.6 |
0108 | Bagging with decision stump as weak learner | Sun Jan 12 14:59:41 2014 | 3.7 |
AFC | An (initial) implementation of K nearest neighbors with K = sqrt(number of training samples). | Thu Jan 2 16:52:21 2014 | 2.8 |
AFC | An implementation of K nearest neighbors with empirically optimized K values. | Thu Jan 2 20:59:59 2014 | 0.7 |
ASapp | Nearest Neighbor Algorithm with k = 5. Normalizes using mean and standard deviation of each attribute. | Thu Jan 9 23:48:40 2014 | 48.5 |
ASapp | Nearest Neighbor Algorithm with k = 5. Normalizes using mean and standard deviation of each attribute. | Sat Jan 11 01:33:38 2014 | 1.1 |
Aaron Doll | Decision tree with reduced error pruning. | Thu Dec 26 16:45:48 2013 | 1.7 |
Aaron Doll | This is an implementation of the random forest algorithm | Thu Jan 9 18:13:46 2014 | 1.1 |
Aaron Doll | This is an implementation of the random forest algorithm | Fri Jan 10 14:50:20 2014 | 0.6 |
Aaron Doll | This is an implementation of the random forests with m=1, 400 trees | Fri Jan 10 22:18:09 2014 | 0.7 |
Aaron Doll | This is an implementation of the random forests with m=1, 400 trees | Sat Jan 11 02:44:42 2014 | 0.6 |
Ameera and David | Decision Tree Learning algorithm implementation. | Sun Jan 12 22:04:52 2014 | 2.4 |
Andra Constantinescu and Bar Shabtai | Vanilla single layer neural network algorithm. Takes binary input and loops through all training examples to update the weights of each attribute. Alpha = 0.1. Very nice, I like! | Thu Jan 9 21:00:47 2014 | 1.1 |
Andra Constantinescu and Bar Shabtai | Vanilla single layer neural network algorithm. Takes binary input and loops through all training examples to update the weights of each attribute. Alpha and epochs optimized for each dataset. | Tue Jan 14 02:29:11 2014 | 0.5 |
Andra Constantinescu and Bar Shabtai | AdaBoost with decision stump as the weak learner. Number of iterations of AdaBoost optimized per example. | Tue Jan 14 02:55:45 2014 | 2.2 |
Andra Constantinescu and Bar Shabtai | Random Forest with number of trees, N and M optimized for each dataset! | Tue Jan 14 07:12:37 2014 | 0.7 |
Andra Constantinescu and Bar Shabtai | AdaBoost on a single-layer neural network. The neural classifier takes binary input and loops through all training examples to update the weights of each attribute. Number of boosting rounds optimized for data set (here 2). | Tue Jan 14 16:20:19 2014 | 1.2 |
Andrew Werner | AdaBoost using vanilla decision trees as the weak learner | Thu Jan 2 22:55:08 2014 | 0.7 |
Andrew Werner | AdaBoost using vanilla decision trees as the weak learner | Sat Jan 4 15:01:34 2014 | 0.7 |
Anna Ren (aren) and Sunny Xu (ziyangxu), Sunnanna | Adaboost using 150 rounds of boosting and decision stumps as a weak learner | Thu Jan 9 22:39:29 2014 | 0.6 |
B&Y | We use the voted-perceptron algorithm. It runs repeatedly over the training set until it finds a prediction vector that is correct on all examples, keeping track of the survival time of each new prediction vector along the way. These survival counts serve as weights in a final weighted majority vote that produces the binary prediction (a minimal sketch appears after the table). | Thu Jan 9 23:14:49 2014 | 0.6 |
BPM | I implemented AdaBoost with binary decision stumps and 100 rounds of boosting. | Wed Jan 8 22:42:32 2014 | 0.7 |
Boar Ciphers | Implements a single-layer neural network with 100 epochs and a 0.001 learning rate | Tue Jan 14 10:53:40 2014 | 0.8 |
Bob Dondero | A decision tree learning algorithm using information gain and chi-squared pruning. | Thu Jan 9 20:39:27 2014 | 5.4 |
Bob Dondero | Adaboost (200 rounds) with a decision tree (max depth 5, chi-squared pruning at 1%) as the weak learner. | Thu Jan 9 20:54:23 2014 | 3.0 |
Bob Dondero | Adaboost (200 rounds) with a decision tree (max depth 5, chi-squared pruning at 1%) as the weak learner. | Fri Jan 10 23:04:11 2014 | 1.6 |
CAPSLOCK | A mostly vanilla decision tree. Uses some cool data structures though. | Thu Jan 9 22:57:33 2014 | 2.4 |
CC | AdaBoost with Decision Stump for 1000 rounds. | Thu Jan 9 14:40:01 2014 | 0.7 |
CC | AdaBoost with Decision Stump for 5000 rounds. | Thu Jan 9 14:49:58 2014 | 0.7 |
CC | AdaBoost with Neural Networks as the learner. On each round, forms its hypothesis from the fraction of the data carrying the highest weights. | Mon Jan 13 16:07:10 2014 | 0.5 |
CC | AdaBoost with Decision Stump for 50 rounds. | Mon Jan 13 21:31:50 2014 | 0.6 |
CC | AdaBoost with Decision Stump for 150 rounds. | Tue Jan 14 10:51:50 2014 | 0.6 |
CC | AdaBoost with Neural Networks as the learner. On each round, forms its hypothesis from the fraction of the data carrying the highest weights. | Tue Jan 14 11:13:03 2014 | 0.5 |
CC | AdaBoost with Neural Networks as the learner. On each round, forms its hypothesis from the fraction of the data carrying the highest weights. | Tue Jan 14 11:41:07 2014 | 0.5 |
CTTT | Adaboost + Decision Stumps (200 rounds). | Mon Jan 6 20:44:55 2014 | 0.6 |
CTTT | Decision Tree Algorithm with Chi-Squared Pre-Pruning | Mon Jan 6 20:48:45 2014 | 1.8 |
CTTT | A decision stump weak learner. | Mon Jan 6 22:58:42 2014 | 7.2 |
Caligula | An implementation of AdaBoost with decision stumps and 800 rounds of boosting. | Wed Jan 8 19:46:23 2014 | 0.7 |
Cam Porter | A version of the AdaBoost learning algorithm that uses decision stumps as a weak learning base. | Thu Jan 9 22:47:38 2014 | 0.7 |
Catherine Wu and Yan Wu | AdaBoost using random sampling and Decision Trees | Wed Jan 8 22:51:13 2014 | 3.1 |
Charliezsc | Adaboost with Decision Stumps | Thu Jan 9 18:21:10 2014 | 0.6 |
Charliezsc | Decision Stump without boosting | Thu Jan 9 19:19:26 2014 | 7.2 |
Charliezsc | Bagging with Decision Stumps (200 weak learners and half bootstrap samples) | Thu Jan 9 20:08:14 2014 | 7.2 |
Charliezsc | Bagging with Decision Stumps (200 weak learners and 2 percent bootstrap samples) | Thu Jan 9 20:49:51 2014 | 4.1 |
Chuck and Larry | Perceptron neural network | Thu Jan 9 18:00:04 2014 | 0.5 |
Chuck and Larry | AdaBooooooost!!! using binary decision stumps with 200 rounds of boosting | Thu Jan 9 18:04:16 2014 | 0.6 |
CookieMonster | This is a bagging algorithm which uses a nearest neighbor algorithm as its weak classifier. | Mon Jan 6 20:47:22 2014 | 0.7 |
CookieMonster | This is a nearest-neighbor classifier which takes a majority vote from the k nearest points in feature space using Euclidean Distance. | Thu Jan 9 22:57:24 2014 | 1.3 |
CookieMonster | This is a bagging algorithm which uses a nearest neighbor algorithm as its weak classifier. | Thu Jan 9 23:13:29 2014 | 1.8 |
DH | AdaBoost with decision trees (max depth = 10), 200 iterations | Sun Jan 5 20:22:55 2014 | 0.5 |
David Hammer | bagging (using binary decision trees) | Sun Jan 5 12:29:43 2014 | 0.9 |
Dr. Roberto | Single layer Neural Net run for 100 epochs with a learning value of 0.01 | Thu Jan 9 13:39:04 2014 | 0.6 |
Dr. Roberto | AdaBoost (100 rounds) with a single-layer neural net, run for 10 epochs with a varying learning rate of around 0.01, as the weak learner | Thu Jan 9 14:51:08 2014 | 1.1 |
Dr. Steve Brule (For Your Health) | Neural Network. | Thu Jan 9 17:28:18 2014 | 0.8 |
Dr. Steve Brule (For Your Health) | Neural Network. | Sun Jan 12 19:58:48 2014 | 0.8 |
Dr. Steve Brule (For Your Health) | Neural Network trained for 100 epochs. | Tue Jan 14 15:38:41 2014 | 0.7 |
EC | Neural Net | Tue Jan 7 18:40:44 2014 | 0.6 |
Epic Harbors | Adaboost with decision stumps as the weak learner and 250 rounds of boosting | Thu Jan 9 15:00:12 2014 | 0.7 |
Estranged Egomaniac | AdaBoost with decision stumps. 250 rounds of boosting. | Sun Jan 12 13:54:21 2014 | 90.2 |
Fanny | An ensemble learning algorithm that consists of AdaBoost using decision stumps as weak learner. | Sat Jan 4 06:55:07 2014 | 0.7 |
Fanny | A voted perceptron algorithm (epoch = 30) | Wed Jan 8 13:14:07 2014 | 0.4 |
Fanny | An ensemble learning algorithm that consists of AdaBoost using decision stumps as weak learner. | Sun Jan 12 04:23:03 2014 | 0.7 |
Fanny | A voted perceptron algorithm (epoch = 10) | Sun Jan 12 22:58:47 2014 | 0.5 |
Fanny | An ensemble learning algorithm that consists of AdaBoost using decision stumps as weak learner. | Mon Jan 13 21:50:32 2014 | 0.7 |
George and Katie | A simple implementation of decision trees as per R&N. | Wed Jan 8 21:45:23 2014 | 1.3 |
George and Katie | Random Forests implemented using vanilla Decision Trees and customizable depth, tree size, and bootstrap size. | Wed Jan 8 23:01:13 2014 | 1.1 |
George and Katie | Random Forests implemented using vanilla Decision Trees and customizable depth, tree size, and bootstrap size. | Thu Jan 9 20:27:29 2014 | 0.6 |
Gewang | A very simple learning algorithm that, on each test example, predicts the classification based on the k nearest neighbors during training | Sun Dec 29 11:13:13 2013 | 2.5 |
Gewang | Predicts the classification label based on the k nearest neighbors | Sun Dec 29 12:33:06 2013 | 3.9 |
Gewang | Predicts the classification label based on the k nearest neighbors | Thu Jan 2 12:43:06 2014 | 1.6 |
Gewang | Predicts the classification label based on the k nearest neighbors | Mon Jan 6 23:16:30 2014 | 1.9 |
Glenn | Backpropagation performed on a neural network with 1 hidden layer for 3000 iterations. The learning rate was set to 0.1 and the layers (from input to output) contain [ 197 4 1 ] units, including a bias unit for each non-output layer. | Thu Jan 9 23:20:33 2014 | 0.8 |
Glenn | Backpropagation performed on a neural network with 1 hidden layer for 500 iterations. The learning rate was set to 0.1 and the layers (from input to output) contain [ 197 51 1 ] units, including a bias unit for each non-output layer. | Tue Jan 14 12:55:42 2014 | 0.5 |
God | Implements naive Bayes algorithm. | Tue Jan 14 00:19:12 2014 | 5.4 |
God | Implements Basic Decision Stumps and chooses the one which performs the best | Tue Jan 14 14:35:40 2014 | 7.2 |
Green Gmoney Choi | This is an implementation of the AdaBoost algorithm with decision stumps. | Sun Jan 12 15:10:36 2014 | 0.6 |
Guessing | Minimally outputs a result by applying a random function. | Sat Jan 11 15:26:17 2014 | 48.5 |
Hello! | AdaBoost with Naive Bayes | Tue Jan 7 15:09:21 2014 | 0.7 |
Hello! | Naive Bayes | Thu Jan 9 23:05:21 2014 | 1.9 |
Hello! | AdaBoost with Decision Stumps | Sun Jan 12 23:21:29 2014 | 0.6 |
Hello! | AdaBoost with Naive Bayes (200) | Tue Jan 14 14:19:36 2014 | 0.7 |
Igor Zabukovec | SVM | Thu Jan 9 15:14:08 2014 | 2.6 |
JS | Single-layer Neural Network, 200 epochs, Learning Rate = 0.01 | Thu Jan 9 15:36:09 2014 | 0.6 |
JS | Single-layer Neural Network, 100 epochs, Learning Rate = 0.001 | Tue Jan 14 15:33:47 2014 | 0.8 |
Jake Barnes | Single layer artificial neural network with 125 rounds of training. Learning rate is 0.1 | Wed Jan 8 16:47:50 2014 | 0.8 |
Jake Barnes | Single layer artificial neural network with 125 rounds of training. Learning rate is 0.1 | Thu Jan 9 11:30:42 2014 | 0.8 |
Jake Barnes | Multiple layer artificial neural network (5 hidden nodes) with 125 rounds of training. Learning rate is 0.1 | Mon Jan 13 17:03:22 2014 | 0.5 |
Jameh | A learning algorithm using Adaboost along with decision stumps to determine a classifier to use in future test cases. Takes a BinaryDataSet and the number of boosting rounds as input. | Tue Jan 7 22:52:25 2014 | 0.7 |
Janie Gu | AdaBoost algorithm with decision trees as the weak learner (with a random subset of training examples selected each round by resampling). | Mon Jan 6 15:26:52 2014 | 0.5 |
Jgs | An implementation of AdaBoost with Decision Stumps that is optimized by only using the best possible decision stump for each attribute. Rounds of boosting = 250 | Thu Jan 9 23:27:29 2014 | 0.6 |
Jgs | An implementation of AdaBoost with Decision Stumps that is optimized by only using the best possible decision stump for each attribute. Rounds of boosting = 20 | Sat Jan 11 13:50:02 2014 | 1.1 |
Jgs | An implementation of AdaBoost with Decision Stumps that is optimized by only using the best possible decision stump for each attribute. Rounds of boosting = 150 | Sat Jan 11 13:56:33 2014 | 0.6 |
Jgs | An implementation of AdaBoost with vanilla Decision Trees. Rounds of boosting = 250 | Sun Jan 12 20:27:00 2014 | 1.6 |
Jgs | An implementation of AdaBoost with vanilla Decision Trees. Rounds of boosting = 150 | Sun Jan 12 22:08:45 2014 | 1.9 |
John Whelchel | Basic implementation of AdaBoost using decision stumps as weak learners and 100 rounds of boosting. | Sun Jan 12 11:32:16 2014 | 5.0 |
John Whelchel | Basic implementation of AdaBoost using decision stumps as weak learners and 150 rounds of boosting. | Sun Jan 12 20:29:33 2014 | 5.0 |
John Whelchel | Basic implementation of AdaBoost using decision stumps as weak learners and 100 rounds of boosting. | Mon Jan 13 20:42:40 2014 | 0.7 |
Jordan | The Adaboost Algorithm with 2000 Decision Stumps | Sun Jan 5 15:21:07 2014 | 0.7 |
Jordan | Adaboost to create a new feature space, then KNN | Sun Jan 5 21:17:16 2014 | 0.4 |
Joshua A. Zimmer | A learning algorithm that uses weighting of the training examples via decision stumps to predict the classification of the test examples. | Fri Jan 10 04:11:05 2014 | 51.5 |
Joshua A. Zimmer | A learning algorithm that uses weighting of the training examples via decision stumps to predict the classification of the test examples. | Tue Jan 14 16:00:25 2014 | 4.4 |
Joshua A. Zimmer | A working (hopefully) attempt at a learning algorithm that uses weighting of the training examples via decision stumps to predict the classification of the test examples. | Mon Jan 20 16:17:10 2014 | 29.0 |
K.L. | Decision Stumps | Tue Jan 7 15:29:40 2014 | 9.1 |
K.L. | AdaBoost run with 1000 iterations. | Tue Jan 7 15:33:41 2014 | 0.9 |
K.L. | Single layer neural net, 100 training rounds, learning rate = .01. | Tue Jan 7 15:36:37 2014 | 0.5 |
Katie and George | An implementation of the (voted) perceptron algorithm run for 100 epochs. | Mon Jan 6 21:04:33 2014 | 0.9 |
Katie and George | An implementation of the (voted) perceptron algorithm run for 25 epochs. | Thu Jan 9 20:05:50 2014 | 0.6 |
Khoa | An algo based on decision stumps | Thu Jan 9 23:57:24 2014 | 48.5 |
Khoa | An algorithm that classifies. | Tue Jan 14 14:16:06 2014 | 1.6 |
Kiwis | AdaBoost with decision stumps and 150 iterations. | Thu Jan 9 07:10:52 2014 | 0.9 |
Kiwis | AdaBoost with decision stumps and 90 iterations. | Fri Jan 10 07:39:57 2014 | 0.8 |
Kiwis | AdaBoost with decision trees. Number of iterations: 100. Max depth for tree: 4. | Sun Jan 12 15:46:24 2014 | 0.4 |
Kiwis | AdaBoost with decision trees. Number of iterations: 100. Max depth for tree: 4. | Sun Jan 12 16:18:27 2014 | 0.5 |
L.M. | K-nearest | Thu Jan 9 03:40:10 2014 | 4.1 |
LK | AdaBoost using decision stump | Fri Jan 3 04:38:34 2014 | 0.9 |
LK | Bagging with AdaBoost that uses decision stumps | Mon Jan 6 08:57:03 2014 | 0.9 |
LK | Bagging with AdaBoost that uses decision stumps | Tue Jan 14 12:25:24 2014 | 1.0 |
Learner | Implementation of Adaboost with decision stumps as the weak learner. | Tue Jan 14 15:17:02 2014 | 7.2 |
Learner | Implementation of Adaboost with decision stumps as the weak learner. 100 rounds of boosting. | Tue Jan 14 15:33:13 2014 | 7.2 |
Lil Thug | A simple decision tree algorithm with chi-squared pruning. | Wed Jan 8 15:12:38 2014 | 2.6 |
Linda Zhong | Basic decision tree algorithm implementation, no pruning. | Fri Jan 10 00:47:46 2014 | 49.5 |
Linda Zhong | Basic decision tree algorithm implementation, no pruning. | Sun Jan 12 16:29:37 2014 | 4.6 |
Macrame | Adaboost, decision stumps, 250 rounds | Tue Jan 14 01:55:44 2014 | 0.6 |
Marius | AdaBoost using decision stumps as the weak-learning algorithm, run for 200 iterations. | Thu Jan 9 20:13:44 2014 | 99.4 |
Marius | AdaBoost using decision stumps as the weak-learning algorithm, run for 200 iterations. | Sun Jan 12 14:09:03 2014 | 0.6 |
Mickey Mouse | An implementation of AdaBoost with Decision Stump as the weak learner and 200 rounds of boosting | Tue Jan 7 15:44:27 2014 | 0.6 |
Mickey Mouse | An implementation of Decision Stump Algorithm | Tue Jan 7 15:47:47 2014 | 7.2 |
Mickey Mouse | An implementation of Naive Bayes | Tue Jan 7 15:50:35 2014 | 1.9 |
Mickey Mouse | An implementation of the Voted Perceptron algorithm with 200 epochs | Wed Jan 8 15:52:53 2014 | 0.8 |
Mike Honcho | Adaboost implementation | Thu Jan 9 21:47:00 2014 | 0.6 |
Mike Honcho | Adaboost implementation | Tue Jan 14 12:23:10 2014 | 3.4 |
Mike Honcho 10 | Adaboost implementation | Tue Jan 14 12:26:40 2014 | 15.4 |
Mike Honcho 100 | Adaboost implementation | Tue Jan 14 12:25:16 2014 | 2.5 |
Mike Honcho 500 | Adaboost implementation | Tue Jan 14 12:28:11 2014 | 1.1 |
Mr. Blobby | AdaBoost (200 rounds) with decision stumps | Thu Jan 9 23:32:26 2014 | 0.6 |
Mr. Blobby | AdaBoost (1000 rounds) with decision stumps | Fri Jan 10 20:11:35 2014 | 0.7 |
Mr. Blobby | AdaBoost (150 rounds) with decision stumps | Fri Jan 10 20:22:29 2014 | 0.6 |
Mr. Blobby | AdaBoost (200 rounds) with decision trees (depth limit of 5) | Sun Jan 12 06:13:26 2014 | 1.0 |
NY | Random forest with 500 iterations | Wed Jan 8 16:53:41 2014 | 1.0 |
NY | Bagged Decision Trees with 500 trees | Wed Jan 8 16:58:45 2014 | 1.8 |
NY | Purifies training set for decision tree (pruning alternative) | Wed Jan 8 17:03:23 2014 | 2.7 |
NY | Decision Tree | Sun Jan 12 15:34:37 2014 | 2.5 |
Nihar the God | Uses Adaboost with decision stumps as weak learners and 150 rounds of boosting | Tue Jan 14 15:07:58 2014 | 1.2 |
Nihar the God | Uses Adaboost with decision stumps as weak learners and 200 rounds of boosting | Tue Jan 14 15:12:20 2014 | 1.5 |
PandaBear | Adaboost on decision stumps, 1000 rounds | Thu Jan 9 18:11:53 2014 | 1.0 |
PandaBear | Adaboost on decision stumps, 500 rounds | Mon Jan 13 22:01:40 2014 | 7.2 |
PandaBear | Adaboost on decision stumps, 1000 rounds | Tue Jan 14 11:09:57 2014 | 7.2 |
R.A.B. | K nearest neighbors with k = 20 | Thu Jan 9 22:26:05 2014 | 1.6 |
R.A.B. | k-NN with K = 21 and votes weighted by the inverse of distance | Tue Jan 14 00:03:23 2014 | 1.6 |
R.A.B. | Adaboost on decision stumps 1000 rounds | Tue Jan 14 02:04:14 2014 | 0.7 |
R.A.B. | Adaboost on decision stumps 150 rounds | Tue Jan 14 03:15:28 2014 | 0.6 |
Ravi Tandon | This is an implementation of the bootstrap aggregation (bagging) algorithm (a generic sketch appears after the table). | Mon Dec 23 18:40:45 2013 | 0.9 |
Ravi Tandon | Implementation of Adaboost, using decision stump as weak learning algorithm. | Tue Dec 31 02:31:23 2013 | 0.7 |
Rocky | Nearest Neighbors with weighted vote (weight inversely proportional to distance), Manhattan distance on normalized attributes; linear scan of all examples to find the K nearest neighbors (not good for very large training sets) | Thu Jan 9 10:11:26 2014 | 1.2 |
Rocky | Bagging algorithm with single layer neural network as the weak learner | Thu Jan 9 20:49:05 2014 | 0.8 |
Rocky | AdaBoost algorithm with the decision stumps as the weak learner, T=150 | Mon Jan 13 21:17:55 2014 | 0.6 |
Rocky | Bagging algorithm with single layer neural network as the weak learner | Tue Jan 14 15:01:01 2014 | 0.6 |
S1 | An implementation of AdaBoost, with decision stumps as the weak learner for the algorithm and 500 rounds of boosting. | Thu Jan 9 15:22:39 2014 | 0.7 |
S1 | Random forests with decision trees (500 trees). | Sat Jan 11 20:12:48 2014 | 0.7 |
S1 | An implementation of AdaBoost, with pruned decision trees as the weak learner for the algorithm and 500 rounds of boosting. | Mon Jan 13 16:46:26 2014 | 0.6 |
SAJE | ADABOOST | Thu Jan 9 15:27:21 2014 | 0.7 |
SAJE | ADABOOST | Thu Jan 9 15:32:14 2014 | 0.6 |
SAJE | ADABOOST | Thu Jan 9 15:35:46 2014 | 0.7 |
SAJE | ADABOOST | Thu Jan 9 15:43:16 2014 | 0.7 |
SAJE | ADABOOST 2 | Thu Jan 9 15:48:58 2014 | 0.7 |
Shaheed Chagani | Naive Bayes Classifier with K-Means Clustering | Wed Dec 18 23:34:38 2013 | 1.0 |
Shaheed Chagani | Naive Bayes Classifier | Sun Jan 12 22:47:10 2014 | 3.2 |
Shaheed Chagani | Naive Bayes Classifier | Mon Jan 13 14:06:46 2014 | 2.0 |
Shaheed Chagani | AdaBoost | Mon Jan 13 21:26:47 2014 | 1.2 |
SkyNet | 1000-iteration AdaBoost with Decision Stump | Thu Jan 9 00:20:25 2014 | 0.7 |
SkyNet | 1000-iteration AdaBoost with Decision Stump | Thu Jan 9 21:22:55 2014 | 0.6 |
Solving From Classifier | The Naive Bayes algorithm solves the maximum-likelihood parameter-learning problem and uses the learned parameters (estimated from the observed attribute values) to form the maximum-likelihood naive Bayes hypothesis (a sketch appears after the table). | Tue Jan 7 13:49:21 2014 | 98.1 |
Solving From Classifier | The Naive Bayes algorithm solves the maximum-likelihood parameter-learning problem and uses the learned parameters (estimated from the observed attribute values) to form the maximum-likelihood naive Bayes hypothesis. | Tue Jan 7 13:55:17 2014 | 1.9 |
Solving From Classifier | The Naive Bayes algorithm using a binary representation as opposed to a discrete representation. | Sat Jan 11 21:42:29 2014 | 2.1 |
Solving From Classifier | The Naive Bayes algorithm using a binary representation at times and a discrete representation at other times. | Sat Jan 11 21:58:17 2014 | 1.9 |
Squirtle | An implementation of the vanilla decision stumps classifier | Thu Jan 9 14:27:10 2014 | 7.2 |
Stephen McDonald | A K-nearest neighbours algorithm that predicts a test example by taking a majority vote of the k nearest neighbours, as measured by Manhattan distance (k is set to 1 for this trial). Additionally, this algorithm first converts the attribute types to numeric and normalizes each attribute to have zero mean and unit variance (a sketch of this pipeline appears after the table). | Wed Jan 8 18:29:05 2014 | 0.9 |
Stephen McDonald | A voted perceptron classifier | Mon Jan 13 16:29:53 2014 | 1.0 |
Sunnanna | nearest neighbors algorithm with k = 7 | Mon Jan 13 14:49:40 2014 | 1.3 |
Sunnanna | single-layer feedforward neural net using a logistic function | Mon Jan 13 16:18:07 2014 | 0.7 |
Sunnanna | Naive Bayes Algorithm using maximum likelihood estimator | Mon Jan 13 18:54:30 2014 | 1.9 |
Supahaka | AdaBoost with Decision Stumps and 100000 rounds of boosting. | Mon Jan 13 00:38:32 2014 | 0.7 |
Supahaka | AdaBoost with Decision Stumps and 500 rounds of boosting. | Tue Jan 14 01:05:08 2014 | 0.7 |
T.C. | Multi-layered Neural Net, 100 iterations, .1 learning rate | Wed Jan 8 04:38:21 2014 | 1.0 |
T.C. | Multi-layered Neural Net, 200 iterations, .1 learning rate | Thu Jan 9 03:05:38 2014 | 0.7 |
Tauriel | RandomForest w/ DecisionTrees | Sat Jan 4 16:30:35 2014 | 0.6 |
Tauriel | AdaBoost w/ DecisionTree | Sun Jan 12 14:11:25 2014 | 0.9 |
Tauriel | RandomForest w/ DecisionTrees | Sun Jan 12 15:23:34 2014 | 0.6 |
Tauriel | RandomForest w/ DecisionTrees | Sun Jan 12 15:40:12 2014 | 0.6 |
Tauriel | RandomForest w/ DecisionTrees pruned at significance level 0.95 | Sun Jan 12 15:51:47 2014 | 0.7 |
The Whitman Whale | Nearest neighbor classification with 17 neighbors and Manhattan distance | Thu Jan 9 22:48:28 2014 | 2.1 |
Tiny Wings | Decision tree with chi-square pruning (pruning significance level = 0.01) | Mon Jan 6 05:05:36 2014 | 1.5 |
Tiny Wings | AdaBoost with decision tree algorithm as weak learner (maximum depth of decision trees = 5, chi-square pruning significance level = 0.01, # of AdaBoost rounds = 200) | Mon Jan 6 05:07:34 2014 | 0.4 |
Valya | Implements neural nets, much like the algorithm used in W6, with alpha = 0.1 | Thu Jan 9 14:20:29 2014 | 55.6 |
Valya | Implements a single layer neural net, much like the algorithm used in W6, with alpha = 0.1 | Sun Jan 12 20:44:33 2014 | 58.1 |
Valya | Implements a single layer neural net, much like the algorithm used in W6, with alpha = 0.01, running for 1000 epochs. | Mon Jan 13 22:06:10 2014 | 0.7 |
Wafflepocalypse | A random forest classifier with 1001 trees. | Thu Jan 9 04:53:45 2014 | 0.6 |
Wu-Tang Dynasty | AdaBoost using random sampling and Decision Trees | Sat Jan 11 18:34:42 2014 | 8.5 |
Wu-Tang Dynasty | AdaBoost using random sampling and Decision Trees | Sun Jan 12 22:48:11 2014 | 14.3 |
Wu-Tang Dynasty | AdaBoost using random sampling and Decision Trees | Mon Jan 13 07:30:13 2014 | 2.6 |
Wu-Tang Dynasty | AdaBoost using random sampling and Decision Trees | Mon Jan 13 21:22:25 2014 | 1.7 |
Wu-Tang Dynasty | AdaBoost using random sampling and Nearest Neighbors | Mon Jan 13 22:15:21 2014 | 1.0 |
Wumi | AdaBoost with Decision Stump and 200 boosts | Thu Jan 9 20:52:31 2014 | 0.6 |
akdote | Naive Bayes Algorithm | Thu Jan 9 21:11:57 2014 | 2.0 |
akdote | Naive Bayes Algorithm | Thu Jan 9 23:44:29 2014 | 1.8 |
akdote | Naive Bayes Algorithm | Thu Jan 9 23:49:50 2014 | 2.0 |
anon | vanilla decision tree | Sun Jan 5 11:01:35 2014 | 1.3 |
anon | AdaBoost (using shallow binary decision trees as weak learner) | Sun Jan 5 11:03:19 2014 | 3.3 |
anon5 | An implementation of the AdaBoost algorithm using decision stumps as the learner with 200 rounds of boosting | Thu Jan 2 15:28:13 2014 | 0.6 |
anon5 | An implementation of the AdaBoost algorithm using decision trees with pruning as the learner with 200 rounds of boosting | Fri Jan 3 20:00:03 2014 | 0.5 |
anon5 | An implementation of the AdaBoost algorithm using vanilla decision trees as the learner with 200 rounds of boosting | Fri Jan 3 20:15:31 2014 | 0.6 |
anon5 | An implementation of a decision-tree-learning algorithm with pruning | Fri Jan 3 23:11:54 2014 | 2.6 |
anon5 | An implementation of vanilla decision-tree-learning | Fri Jan 3 23:15:26 2014 | 2.5 |
asdf | A single perceptron (using a logistic threshold) with a learning rate of 0.001 and 100 epochs of training (a sketch of this unit appears after the table). | Thu Jan 9 23:08:08 2014 | 0.8 |
bcfour | A Naive Bayes approach to classification. | Sat Jan 4 22:45:39 2014 | 1.9 |
bcfour,jkwok | Naive Bayes with standard Laplacian correction | Thu Jan 9 13:03:05 2014 | 1.9 |
bchou | AdaBoost on Binary Decision Stumps. 150 rounds of Boosting | Fri Jan 3 07:09:15 2014 | 0.6 |
bchou | 7-nearest neighbors | Fri Jan 3 20:02:09 2014 | 1.3 |
bclam | Single-layer Neural Network - 1000 epochs, Learning rate of 0.05 | Wed Jan 8 22:16:33 2014 | 0.7 |
bclam | Single-layer Neural Network - 2000 epochs, Learning rate of 0.01 | Tue Jan 14 07:12:01 2014 | 0.7 |
bclam | Single-layer Neural Network - 10000 epochs, Learning rate of 0.05 | Tue Jan 14 07:20:25 2014 | 0.8 |
bfang | Boosting with decision stumps and early stopping | Sun Dec 29 23:30:13 2013 | 0.8 |
bfang | Bagging with decision trees | Wed Jan 1 11:30:09 2014 | 1.9 |
bfang | Boosting with decision stumps (100 rounds) | Fri Jan 3 21:10:37 2014 | 0.6 |
bfang | Single layer neural network, 80 epochs, alpha=0.01 | Fri Jan 10 12:43:10 2014 | 0.6 |
bfang | Single layer neural network, 80 epochs, alpha=0.01 | Sun Jan 12 13:49:39 2014 | 0.6 |
corgi | basic decision stump | Thu Jan 9 23:35:40 2014 | 7.2 |
corgi2.0 | AdaBoost 150 w/ basic decision stumps | Sat Jan 11 23:14:44 2014 | 1.2 |
corgi3.0 | basic decision tree | Tue Jan 14 02:39:38 2014 | 39.2 |
corgi3.0 | decision tree, discrete attribute splitting | Tue Jan 14 12:55:01 2014 | 27.8 |
corgi4.0 | decision tree with chi squared pruning | Tue Jan 14 04:16:18 2014 | 39.2 |
corgi4.0 | decision tree with chi squared pruning | Tue Jan 14 12:52:06 2014 | 39.2 |
corgi5.0 | decision tree with chi squared pruning | Tue Jan 14 14:26:38 2014 | 39.2 |
dericc, sigatapu | 200-iteration AdaBoost with Decision Trees | Mon Jan 13 16:45:48 2014 | 0.5 |
dlackey | This is an implementation of AdaBoost that uses 175 rounds of boosting. The weak learning algorithm used is a decision stump that directly minimizes the weighted training error (a minimal sketch of this combination appears after the table). | Wed Jan 8 14:10:54 2014 | 0.6 |
dmmckenn_pthorpe | Basic Stumps Implementation | Thu Jan 9 21:12:17 2014 | 7.2 |
dmmckenn_pthorpe | Implements Naive Bayes using discretization as opposed to continuous values. | Thu Jan 9 21:21:27 2014 | 1.9 |
dmmckenn_pthorpe | Implements Adaboost with 1,000 rounds of boosting with decision stumps as the weak learner. | Thu Jan 9 23:32:58 2014 | 0.7 |
ebp | Adaboost with decision stumps minimizing smoothed weighted training error, 100 rounds of boosting. | Mon Jan 6 21:02:21 2014 | 0.6 |
ebp and Wafflepocalypse | Adaboost on random forests of 30 trees, sampling .65 of the weighted training data with replacement for each hypothesis, 150 rounds of boosting. | Sat Jan 11 02:46:53 2014 | 0.4 |
finn&jake | Knn, K=40, Euclidean distance for numeric and standardized distance for discrete variables; majority vote for nearest neighbors. | Thu Jan 9 14:13:25 2014 | 2.4 |
finn&jake | Knn, K=5, Euclidean distance for numeric and standardized distance for discrete variables; majority vote for nearest neighbors. | Tue Jan 14 13:22:50 2014 | 0.9 |
haoyu | Random Forest with Decision Tree | Fri Dec 27 00:48:21 2013 | 1.6 |
haoyu | Single layer neural network | Mon Dec 30 15:37:54 2013 | 0.8 |
haoyu | AdaBoost with Single Layer Neural Network | Thu Jan 2 17:46:26 2014 | 0.6 |
hb | KNN with L2 distance, k empirically set after cross validation | Thu Jan 9 22:52:35 2014 | 1.0 |
hb | KNN with L2 distance, k empirically set after cross validation | Tue Jan 14 14:50:47 2014 | 0.8 |
hb | AdaBoost, basic decision stumps | Tue Jan 14 15:14:42 2014 | 0.7 |
hb | AdaBoost, KNN as weak learner, k chosen empirically | Tue Jan 14 16:02:40 2014 | 0.7 |
hb | AdaBoost, KNN as weak learner, k chosen empirically | Tue Jan 14 16:33:16 2014 | 0.7 |
hi | an implementation of AdaBoost woo hoo | Thu Jan 9 23:57:10 2014 | 0.6 |
jabreezy | Adaboost with decision stumps (boosted 200 rounds) | Thu Jan 9 23:49:49 2014 | 0.6 |
lilt | An implementation of 10-nearest neighbors | Thu Jan 9 00:23:16 2014 | 1.1 |
lilt | a decision stump implementation | Mon Jan 13 15:03:39 2014 | 51.4 |
lolz | Adaboost with decision stumps as the weak learner algorithm (k = 200) | Thu Jan 9 00:39:23 2014 | 0.6 |
mdrjr | This is an implementation of k nearest neighbors. I've played around with both k and the distance function. | Thu Jan 9 20:04:54 2014 | 1.4 |
me | Adaboost using decision stumps and 400 rounds of boosting. | Thu Jan 9 23:18:51 2014 | 0.6 |
null | null | Thu Jan 2 21:29:49 2014 | 48.5 |
qshen | An implementation of AdaBoost that uses a weak learner that chooses the decision stump that minimizes the weighted training error and is iterated 500 times. | Mon Dec 30 13:51:56 2013 | 0.6 |
sabard | A decision tree learning algorithm. | Thu Jan 9 23:04:47 2014 | 48.5 |
sabard | A decision tree learning algorithm with chi squared pruning. | Tue Jan 14 05:27:30 2014 | 1.8 |
sabard | A decision tree learning algorithm with chi squared pruning (5%). | Tue Jan 14 15:32:46 2014 | 1.8 |
sabard | Decision Stump weak learning algorithm to be used with AdaBoost | Tue Jan 14 15:39:42 2014 | 7.2 |
skarp | AdaBoost with decision stumps as the weak learner (chosen to minimize the weighted training error) and 200 rounds of boosting. | Sat Dec 28 16:13:58 2013 | 0.6 |
skarp | AdaBoost with decision stumps as the weak learner (chosen to minimize the weighted training error) and 300 rounds of boosting. | Tue Jan 14 12:07:17 2014 | 0.7 |
skarp | AdaBoost with decision stumps as the weak learner (chosen to minimize the weighted training error) and 100 rounds of boosting. | Tue Jan 14 12:12:07 2014 | 0.7 |
skarp | AdaBoost with decision trees as the weak learner (chosen to minimize the entropy, where each tree is restricted to a maximum depth of 4) and 200 rounds of boosting. | Tue Jan 14 12:29:29 2014 | 0.4 |
skarp | AdaBoost with decision trees as the weak learner (chosen to minimize the entropy, where each tree is restricted to a maximum depth of 5) and 200 rounds of boosting. | Tue Jan 14 12:38:06 2014 | 0.5 |
sm | AdaBoost, using pruned Decision Trees as the weak learner. | Thu Jan 9 19:13:02 2014 | 0.6 |
sm | AdaBoost, using pruned Decision Trees as the weak learner. | Mon Jan 13 15:35:01 2014 | 0.4 |
tenrburrito | AdaBoost algorithm with Decision Trees as weak learning algorithm | Thu Jan 9 15:24:10 2014 | 49.1 |
tenrburrito | AdaBoost algorithm with Decision Trees as weak learning algorithm | Tue Jan 14 16:38:21 2014 | 0.5 |
vluu | An attempt at AdaBoost with Naive Bayes | Thu Dec 26 21:36:21 2013 | 1.1 |
vvspr | An implementation of the Naive Bayes Algorithm | Tue Dec 31 13:37:34 2013 | 1.5 |
weezy | Implements AdaBoost using decision stumps as a weak learner and running for 1000 rounds of boosting. | Wed Jan 8 23:17:50 2014 | 0.7 |
weezy | Implements a k-Nearest Neighbor algorithm with k = 15. | Thu Jan 9 20:02:27 2014 | 3.9 |
weezy | Implements a k-Nearest Neighbor algorithm with k = 27. | Sat Jan 11 00:38:42 2014 | 3.5 |
ytterbium | AdaBoost with decision stumps. (1000 rounds) | Thu Jan 9 20:00:48 2014 | 0.7 |
Table generated: Mon Jan 20 16:17:16 2014
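
Boosted decision stumps account for most of the table, so a minimal Python sketch of that combination follows. It implements the recipe several entries describe (each round, pick the stump that minimizes the weighted training error), not any particular team's code; all names are illustrative, and labels are assumed to be in {-1, +1}.

    import numpy as np

    def best_stump(X, y, w):
        # Exhaustive search for the (feature, threshold, sign) stump
        # that minimizes the weighted training error.
        best = (0, 0.0, 1, np.inf)
        for j in range(X.shape[1]):
            for t in np.unique(X[:, j]):
                for sign in (1, -1):
                    pred = np.where(X[:, j] <= t, sign, -sign)
                    err = w[pred != y].sum()
                    if err < best[3]:
                        best = (j, t, sign, err)
        return best

    def adaboost(X, y, rounds=150):
        # Standard AdaBoost; returns a list of (alpha, stump) pairs.
        w = np.full(len(y), 1.0 / len(y))
        ensemble = []
        for _ in range(rounds):
            j, t, sign, err = best_stump(X, y, w)
            err = max(err, 1e-12)                 # guard against a perfect stump
            alpha = 0.5 * np.log((1 - err) / err)
            pred = np.where(X[:, j] <= t, sign, -sign)
            w = w * np.exp(-alpha * y * pred)     # upweight the mistakes
            w = w / w.sum()
            ensemble.append((alpha, (j, t, sign)))
        return ensemble

    def predict(ensemble, X):
        total = sum(a * np.where(X[:, j] <= t, s, -s) for a, (j, t, s) in ensemble)
        return np.sign(total)

The number of rounds is the knob most teams tuned; the scores above suggest that anywhere from roughly 100 to 1000 rounds lands near the same error on this data.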
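
The B&Y entry describes the voted perceptron (Freund and Schapire) closely enough to sketch: every prediction vector is stored with its survival count, and the final classification is a weighted majority vote over all of them. A minimal version under the same assumptions as above:

    import numpy as np

    def voted_perceptron_train(X, y, epochs=30):
        # Returns the list of (prediction vector, survival count) pairs.
        v, c, history = np.zeros(X.shape[1]), 0, []
        for _ in range(epochs):
            for x, label in zip(X, y):
                if label * v.dot(x) <= 0:     # mistake: retire the old vector
                    history.append((v.copy(), c))
                    v = v + label * x
                    c = 1
                else:
                    c += 1                    # the vector survives this example
        history.append((v, c))
        return history

    def voted_perceptron_predict(history, x):
        # Weighted majority vote over every vector the training run produced.
        s = sum(c * np.sign(v.dot(x)) for v, c in history)
        return 1 if s >= 0 else -1

epochs=30 mirrors the Fanny entry; B&Y instead iterate until the training set is fully separated.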
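
Several entries use a single logistic unit (the asdf row gives the clearest parameters: logistic threshold, learning rate 0.001, 100 epochs). A sketch using the standard squared-error gradient update for a sigmoid unit, with labels in {0, 1}; whether this matches the course's W6 code is an assumption:

    import numpy as np

    def train_logistic_unit(X, y, lr=0.001, epochs=100):
        # Single unit with a logistic (sigmoid) output; last weight is the bias.
        w = np.zeros(X.shape[1] + 1)
        Xb = np.hstack([X, np.ones((len(X), 1))])
        for _ in range(epochs):
            for x, t in zip(Xb, y):
                out = 1.0 / (1.0 + np.exp(-w.dot(x)))
                # Squared-error gradient step for a sigmoid unit.
                w += lr * (t - out) * out * (1 - out) * x
        return w

    def predict_logistic_unit(w, x):
        xb = np.append(x, 1.0)
        return 1 if w.dot(xb) >= 0 else 0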
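
Several nearest-neighbor entries (ASapp, Stephen McDonald, Rocky) normalize each attribute to zero mean and unit variance before voting over the k nearest points. A sketch of that pipeline, assuming numeric attributes and the Manhattan distance the Stephen McDonald entry uses; the class name is illustrative:

    import numpy as np
    from collections import Counter

    class KNN:
        def __init__(self, k=5):
            self.k = k

        def fit(self, X, y):
            # Standardize every attribute: zero mean, unit variance.
            self.mu, self.sigma = X.mean(axis=0), X.std(axis=0) + 1e-12
            self.X = (X - self.mu) / self.sigma
            self.y = np.asarray(y)

        def predict_one(self, x):
            z = (x - self.mu) / self.sigma
            dist = np.abs(self.X - z).sum(axis=1)   # Manhattan distance
            nearest = np.argsort(dist)[: self.k]
            return Counter(self.y[nearest]).most_common(1)[0][0]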
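
The Solving From Classifier rows describe maximum-likelihood naive Bayes over discrete attributes, and the bcfour,jkwok row adds the standard Laplacian correction. A sketch covering both (alpha = 0 gives pure maximum likelihood, alpha = 1 the Laplace correction), assuming X and y are NumPy arrays of discrete values:

    import numpy as np

    def train_nb(X, y, alpha=1.0):
        # Estimate class priors and per-class conditional value frequencies.
        classes = np.unique(y)
        prior = {c: np.mean(y == c) for c in classes}
        cond = {}
        for c in classes:
            Xc = X[y == c]
            for j in range(X.shape[1]):
                total = len(Xc) + alpha * len(np.unique(X[:, j]))
                for v, n in zip(*np.unique(Xc[:, j], return_counts=True)):
                    cond[(c, j, v)] = (n + alpha) / total
        return prior, cond, classes

    def predict_nb(model, x):
        # Pick the class with the highest log-posterior.
        prior, cond, classes = model
        scores = {}
        for c in classes:
            lp = np.log(prior[c])
            for j, v in enumerate(x):
                lp += np.log(cond.get((c, j, v), 1e-9))   # unseen value fallback
            scores[c] = lp
        return max(scores, key=scores.get)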
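
Finally, the bagging and random forest rows all rest on bootstrap aggregation (the Ravi Tandon entry names it directly). A generic sketch that wraps any base learner; base_fit is an assumed interface (a function that fits on data and returns a classifier callable), not any team's API:

    import numpy as np

    def bagging_train(X, y, base_fit, n_learners=200, frac=1.0, seed=0):
        # Fit each learner on a bootstrap resample of the training data.
        rng = np.random.default_rng(seed)
        n = int(frac * len(y))
        models = []
        for _ in range(n_learners):
            idx = rng.integers(0, len(y), size=n)   # sample with replacement
            models.append(base_fit(X[idx], y[idx]))
        return models

    def bagging_predict(models, x):
        # Majority vote of the individual learners (labels in {-1, +1}).
        votes = sum(m(x) for m in models)
        return 1 if votes >= 0 else -1

The "half bootstrap samples" and "2 percent bootstrap samples" entries above correspond to frac=0.5 and frac=0.02; a random forest additionally restricts each tree to a random feature subset at every split.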