Comments on: Random Decision Forest’s Interaction Affordances
http://urbanhonking.com/ideasfordozens/2013/10/25/random-decision-forests-interaction-affordances/

By: greg
http://urbanhonking.com/ideasfordozens/2013/10/25/random-decision-forests-interaction-affordances/#comment-24992
Fri, 01 Nov 2013 14:24:26 +0000

Thanks for the comment, Joe, and especially for posting the scikit-learn functions. As mentioned at the top of the post, I wrote this as part of a class I’m taking at the MIT Media Lab on Interactive Machine Learning. IML is a pretty new field, and it sits exactly at the intersection of ML and HCI you’re describing. I decided to post my work publicly, despite knowing it’s technical enough to be hard for most readers to access, in the hope that it would at least be useful to some. At this point, the obstacles to even the most basic uses of machine learning are still quite high for most people (something I’m working on through my efforts to add wrappers for OpenCV’s machine learning functions to my OpenCV library for Processing).

By: Joe McCarthy
http://urbanhonking.com/ideasfordozens/2013/10/25/random-decision-forests-interaction-affordances/#comment-24749
Tue, 29 Oct 2013 14:43:07 +0000

Interesting collection of potentially visualizable aspects of Random Forests. While these seem like they would be very useful to someone with sufficient background in machine learning tools, I wonder how many of them would be easily understandable to the uninitiated. This seems like a fruitful area for collaboration between ML and HCI researchers.

I have not used OpenCV, but I’ve recently been working with scikit-learn’s open-source, Python-based implementations of several machine learning algorithms, including its RandomForestClassifier. FWIW, two of the five output elements mentioned above, and the input element, are currently supported by this implementation; I suspect the other elements may currently be hidden in the code, but I would have to dig around further to see whether and how they might be exposed.

Output: variable importance
sklearn attribute: feature_importances_

Output: prediction confidence
sklearn method: predict_proba(X)

Input: Max number of trees in the forest
sklearn parameter: n_estimators
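A minimal sketch of the three hooks listed above, on a synthetic dataset (the toy data and parameter values here are illustrative, not from the original post):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Toy data: 100 samples, 4 features, binary labels.
X, y = make_classification(n_samples=100, n_features=4, random_state=0)

# Input: n_estimators sets the number of trees in the forest.
clf = RandomForestClassifier(n_estimators=10, random_state=0)
clf.fit(X, y)

# Output: per-feature variable importances (they sum to 1.0).
print(clf.feature_importances_)

# Output: prediction confidence, i.e. per-class probabilities
# averaged over the trees in the forest.
print(clf.predict_proba(X[:1]))
```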
