Research Proposal: Accounting for Anchoring Bias on User Labeling in Machine Learning Systems
November 1, 2013

(This post describes my proposed final research project for MAS.S62 Interactive Machine Learning at the MIT Media Lab.)

Description

Many interactive machine learning systems ask users to make sequences of judgments while training. For example, recommender systems often prompt the user to rate a series of items in a single session. Most systems assume that such judgments are stable over time and across varying conditions. However, there is extensive evidence from psychology that such judgments are subject to anchoring effects. First demonstrated by Kahneman and Tversky, “anchoring” describes how subjects make judgments by adjusting away from an initial piece of information rather than based on a consistent scale.

I propose to explore the intuition that users of interactive machine learning systems are subject to anchoring bias and that accounting for such bias can improve the results of interactive machine learning systems.

Specifically, I propose to look for statistical evidence of anchoring bias in existing sequentially labeled data sets such as the Netflix Prize set of movie ratings. Secondarily, I propose to explore the design of a novel interactive machine learning system that takes anchoring bias as its starting point.
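As a rough sketch of what such a statistical check might look like, one could regress each rating on the rating the same user gave immediately before it in a session, controlling for the item’s overall mean rating; a reliably positive coefficient on the previous rating would be consistent with anchoring rather than a stable internal scale. The file name and column names below are hypothetical stand-ins for a sequentially labeled data set like the Netflix Prize ratings.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical input: one row per rating event (user, item, rating, timestamp).
ratings = pd.read_csv("ratings.csv")

# Order each user's ratings in time and look up the rating given just before this one.
ratings = ratings.sort_values(["user", "timestamp"])
ratings["prev_rating"] = ratings.groupby("user")["rating"].shift(1)

# Control for item quality so that popular items don't masquerade as anchoring.
ratings["item_mean"] = ratings.groupby("item")["rating"].transform("mean")

# Drop each user's first rating, which has no anchor.
seq = ratings.dropna(subset=["prev_rating"])

# If judgments were made on a consistent scale, prev_rating should add little once
# item_mean is accounted for; a significant positive coefficient suggests anchoring.
model = smf.ols("rating ~ prev_rating + item_mean", data=seq).fit()
print(model.summary())
```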

Motivation

Understanding the basis of user decision-making is essential to the design of effective interactive machine learning systems. Over the last 40 years, the psychology of judgment and decision-making has cataloged many cognitive biases that affect the kinds of evaluations machine learning systems ask of their users. This body of research has yet to significantly impact the machine learning research community. Most machine learning systems treat the preferences and judgments expressed through user labels as consistent across time and varying conditions. If, as the psychological literature suggests, these labels are instead swayed by users’ cognitive biases, then measuring those biases and accounting for them in the design of our algorithms and interactive systems could significantly improve the quality of machine learning applications’ results.

Contribution Type

Understanding Users

Preliminary Bibliography

Tversky, Amos, and Daniel Kahneman. “Judgment under Uncertainty: Heuristics and Biases.” Science 185, no. 4157 (1974): 1124–1131.

Adomavicius, Gediminas, Jesse Bockstedt, Shawn Curley, and Jingjing Zhang. “Recommender systems, consumer preferences, and anchoring effects.” In RecSys 2011 Workshop on Human Decision Making in Recommender Systems, pp. 35–42. 2011.

Koren, Yehuda. “The BellKor solution to the Netflix Grand Prize.” Netflix Prize documentation (2009).

Cardie, Claire. “A cognitive bias approach to feature selection and weighting for case-based learners.” Machine Learning 41, no. 1 (2000): 85–116.

Cardie, Claire. “Integrating case-based learning and cognitive biases for machine learning of natural language.” J. Exp. Theor. Artif. Intell. 11, no. 3 (1999): 297–337.

Slovic, Paul. “The construction of preference.” American Psychologist 50, no. 5 (1995): 364.

Carenini, Giuseppe, and David Poole. “Constructed preferences and value-focused thinking: implications for AI research on preference elicitation.” In AAAI–02 Workshop on Preferences in AI and CP: Symbolic Approaches, pp. 1–10. 2002.

Herbrich, Ralf, Thore Graepel, Peter Bollmann-Sdorra, and Klaus Obermayer. “Learning preference relations for information retrieval.” In ICML–98 Workshop: Text Categorization and Machine Learning, pp. 80–84. 1998.

Carterette, Ben, and Desislava Petkova. “Learning a ranking from pairwise preferences.” In Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 629–630. ACM, 2006.

Random Decision Forest’s Interaction Affordances
October 25, 2013

(This post responds to an assignment for MAS.S62 Interactive Machine Learning at the MIT Media Lab to analyze the input and output channels of a machine learning algorithm for their potential as affordances for interaction.)

When examined for its interaction affordances, the Random Decision Forest algorithm (Breiman 2001) distinguishes itself from other machine learning algorithms by its potential for transparency. Due to the nature of the algorithm, most Random Decision Forest implementations provide an extraordinary amount of information about the final state of the classifier and how it was derived from the training data.

In this analysis, I discuss five outputs available from a Random Decision Forest and ways they could be used to provide interface or visualization options for a non-expert user of such a classifier. I also describe one input that could be similarly useful.

(For each output and input, I provide a link to the corresponding function in the OpenCV Random Decision Forest implementation. Other implementations should also provide similar access.)

Output: variable importance

In addition to returning the classification result, most Random Decision Forest implementations can also provide a measure of the importance that each variable in the feature vector played in the result. These importance scores are calculated by adding noise to each variable one-by-one and calculating the corresponding increase in the misclassification rate.

Presenting this data to the user in the form of a table, ranked list, or textual description could aid in feature selection and also help improve user understanding of the underlying data.

OpenCV’s implementation: CvRTrees::getVarImportance()
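Outside OpenCV, the same information can be sketched with scikit-learn’s RandomForestClassifier: the fitted forest exposes per-feature importance scores directly, and permutation_importance is closer in spirit to the noise-injection procedure described above. This is only an illustrative sketch on a toy data set, not the OpenCV call.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = load_iris(return_X_y=True)
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Built-in impurity-based importances: one score per feature, summing to 1.
print(forest.feature_importances_)

# Permutation importance shuffles each feature in turn and records the drop in
# accuracy, which matches the "add noise and watch the error rise" description.
result = permutation_importance(forest, X, y, n_repeats=10, random_state=0)
for idx in result.importances_mean.argsort()[::-1]:
    print(f"feature {idx}: {result.importances_mean[idx]:.3f}")
```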

Output: proximity between any two samples

A trained Random Decision Forest can calculate the proximity between any two given samples in the training set. Proximity is calculated by comparing the number of trees where the two samples ended up in the same leaf node to the total number of trees in the ensemble.

This proximity data could be presented to the user of an interactive machine learning system in order to both improve the user’s understanding of the current state of training and to suggest additional labeled samples that would significantly improve classification. By iteratively calculating the proximities of each pair of samples in the training set (or a large subset of these) a system could produce a navigable visualization of the existing training samples that could significantly aid the user in identifying mis-labeled samples, crafting useful additional samples, and understanding the causes of the system’s predictions.

OpenCV’s implementation: CvRTrees::get_proximity()
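scikit-learn’s RandomForestClassifier has no built-in proximity method, but the measure described above can be reconstructed from apply(), which reports the leaf each sample lands in for every tree; the sketch below is an illustration of the idea rather than OpenCV’s get_proximity().

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# apply() returns, for each sample, the leaf index it reaches in every tree.
leaves = forest.apply(X)  # shape: (n_samples, n_trees)

def proximity(i, j):
    """Fraction of trees in which samples i and j fall into the same leaf."""
    return float(np.mean(leaves[i] == leaves[j]))

print(proximity(0, 1))    # two samples of the same class tend to score near 1.0
print(proximity(0, 100))  # samples from different classes tend to score near 0.0
```

A full pairwise proximity matrix built this way could feed a layout technique such as multidimensional scaling to produce the kind of navigable training-set map described above.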

Output: prediction confidence

Due to the ensemble structure of a Random Decision Forest, the classifier can calculate a confidence score for its predictions. The confidence score is calculated based on the proportion of decision trees in the forest that agreed with the winning classification for the given sample.

This confidence could be presented to a user in several ways. A user could set a confidence threshold below which predictions are ignored; the system could prompt the user for additional labeled samples whenever the confidence is too low; or the confidence could be reflected in the visual presentation of the prediction (size, color, etc.) so that the user can take it into consideration.

OpenCV’s implementation: CvRTrees::predict_prob() (Note: OpenCV’s implementation only works on binary classification problems.)
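A sketch of the same idea with scikit-learn: predict_proba averages each tree’s class probabilities rather than counting strict votes, but the maximum probability plays the same role as a confidence score and, unlike OpenCV’s predict_prob(), it is not limited to binary problems.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# One probability per class, averaged over the trees in the forest.
for sample_probs in forest.predict_proba(X[:5]):
    label = sample_probs.argmax()
    confidence = sample_probs[label]
    if confidence < 0.7:  # an arbitrary threshold a UI control might expose
        print(f"class {label} at {confidence:.2f} - low confidence, ask for a label")
    else:
        print(f"class {label} at {confidence:.2f}")
```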

Output: individual decision trees

Since Random Decision Forest is usually implemented on top of a simpler decision tree classifier, many implementations provide direct access to the individual decision trees that make up the ensemble.

With access to the individual decision trees, an application could provide the user with a comprehensive visualization of the Forest’s operation including showing the error rates for the individual trees and the variable on which each tree made each split. This visualization could aid in feature selection and in-depth evaluation and exploration of the quality of the training set.

OpenCV’s implementation: CvRTrees::get_tree()
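For comparison, in scikit-learn the fitted trees are available in the estimators_ list, and sklearn.tree.export_text dumps each one as a readable set of split rules; a visualization layer could be built on exactly this information. Again, this is a sketch of the idea rather than the OpenCV API.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import export_text

data = load_iris()
forest = RandomForestClassifier(n_estimators=10, random_state=0)
forest.fit(data.data, data.target)

# estimators_ holds the individual decision trees that make up the ensemble.
for i, tree in enumerate(forest.estimators_[:3]):
    print(f"--- tree {i}, depth {tree.get_depth()} ---")
    print(export_text(tree, feature_names=list(data.feature_names)))
```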

Output: training error

Unlike many other machine learning methods, Random Decision Forests store their training samples internally as they construct their decision trees, so they can evaluate their own training error once training is complete. On classification problems, this error is calculated as the percentage of misclassified training samples; on regression problems, it is the mean square of the errors.

This error metric is simple enough that it could be shown to an end user as a basic form of feedback on the current state of training quality. However, without other metrics, it risks encouraging the user to overfit the training set.

OpenCV’s implementation: CvRTrees::get_train_error() (Note: OpenCV’s implementation only works on classification problems.)
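As a scikit-learn sketch of this feedback signal, the training error is just one minus the accuracy on the training set; the out-of-bag error printed alongside it is a complementary, less optimistic number that speaks to the overfitting concern above (an addition in this example, not part of the OpenCV call).

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
forest = RandomForestClassifier(n_estimators=100, oob_score=True, random_state=0).fit(X, y)

# Training error: fraction of training samples the forest now misclassifies.
train_error = 1.0 - forest.score(X, y)

# Out-of-bag error: each sample is scored only by trees that never saw it,
# giving a rough generalization estimate without a separate validation set.
oob_error = 1.0 - forest.oob_score_
print(f"training error: {train_error:.3f}, out-of-bag error: {oob_error:.3f}")
```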

Input: max number of trees in the forest

The most important input a user provides to a Random Decision Forest is the maximum number of trees allowed in the forest. Up to the point of diminishing returns, this is essentially a proxy for the trade-off between training time and result quality.

This could be presented to the user as a slider, allowing them to choose faster training or better results throughout the process of interactively improving a classifier.

OpenCV’s implementation: the max_num_of_trees_in_the_forest termination parameter of CvRTParams (CvDTreeParams’ max_depth, by contrast, controls the depth of individual trees rather than the size of the forest).
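One way such a slider could be wired up, sketched here with scikit-learn rather than OpenCV: with warm_start=True, raising n_estimators and calling fit again grows additional trees without discarding the ones already trained, so moving the slider up is cheap.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)

forest = RandomForestClassifier(n_estimators=10, warm_start=True, random_state=0)
forest.fit(X, y)

for n in (25, 50, 100):  # hypothetical slider positions
    forest.set_params(n_estimators=n)
    forest.fit(X, y)     # only the newly requested trees are grown
    print(f"{n} trees -> training accuracy {forest.score(X, y):.3f}")
```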

An ofxaddons.com Update
July 11, 2012

James George and I just ran an epic update to ofxaddons.com, the site we run that indexes extensions and libraries for OpenFrameworks. With the recent version switchover of the Github API, our addon detection code had gotten out of date and hadn’t run in a few weeks.

It’s fixed now, and in running it we found a whole raft of awesome new addons, bringing the grand total up to 564. In this post I’ve collected a few of the addons that caught my eye as being exciting, though there are a bunch more. You can follow the ofxaddons changes page to keep up.

Let’s start off with two addons that have cool images to share. First is the very clever ofxRetroPixel by Akira Hayaska. ofxRetroPixel converts hi-res graphics into low-res retro pixels “like a 70s pong game”. Here’s a sample of the results:

ofxRetroPixel

Another neat visual addon is ofxSparklines by Christopher Baker. ofxSparklines produces small line graphs from data with a lot of options for visual refinement:

Seems like a great tool for adding some visual feedback to an app.

Another exciting (and timely) addon is ofxLytroFileTools from Jason Van Cleave, which lets you parse and view files from the Lytro re-focusable camera in OF.

The next two addons were enough to inspire us to add a new category to the site: Machine Learning. ofxSelfOrganizingMap implements an unsupervised machine learning algorithm that has applications in data clustering, compression, and pattern recognition. The addon’s readme has some awesome examples, including a visualization of the colors of the seasons created by clustering images from Google image search:

And a visual clustering of countries via UN poverty statistics:

Another machine learning addon, ofxSequence, provides classification and recognition of numeric sequences, a technique with many applications, including gesture recognition. It’s a classifier that you train with some example data; it then uses a hidden Markov model to recognize patterns in that data.

Two last addons I wanted to mention, both relating to computer vision and tracking people.

Joel Gethin Lewis has a new addon for background removal, ofxBackground, based on a classic example from Learning OpenCV.

And, last but not least, Chris O’Shea made ofxThermitrack, an OF interface to the Thermitrack thermal imaging camera, which provides “high-resolution position data from people moving within its field of view” (and looks like a smoke detector):

There’s also been a lot of excellent development on existing addons as well as some intriguing-looking projects that aren’t quite ready for release yet. You can find all of it at ofxaddons.com/changes.

Teaching Makematics at ITP
April 27, 2012

I’m proud to announce that I’ll be teaching a class at NYU ITP next semester. The class grew out of my work on Makematics; it’s called “Makematics: Turning Computer Science Research into Creative Tools”. Here’s the full description from the Fall 2012 course listings (which will sound somewhat familiar to you if you read my intro to Makematics):

Artists build on top of science. Today’s cutting edge math and computer science research becomes tomorrow’s breakthrough creative projects.

Computer vision algorithms, machine learning techniques, and 3D topology are becoming vital prerequisites to doing daily work in creative fields from interactive art to generative graphics, data visualization, and digital fabrication. If they don’t grapple with these subjects themselves, artists are forced to wait for others to digest this new knowledge before they can work with it. Their creative options shrink to those parts of this research selected by Adobe and Autodesk for inclusion in prepackaged tools.

This class is designed to help you start seizing the results of this research for your own creative work. You’ll learn how to explore the published academic literature for techniques that will help you build your dream projects. And you’ll learn how to use those techniques to make those projects a reality.

Each week we’ll explore a technique from one of these research fields. We’ll learn to understand the original research and see how to implement it in code that you can use in your projects. You’ll learn to use the marching squares algorithm to detect fingers or make 3D models into something you can laser cut. You’ll learn how to use support vector machines to train your own object detector or analyze a body of text. We’ll cover a series of such topics, each of which has a wide range of applications in different creative media.

I’m still working on finalizing exactly the technical topics I’ll be covering. So far I have units planned on Marching Squares, Support Vector Machines, and Principal Component Analysis. I’m looking for a good topic in probability (and am open to suggestions). I’ll be teaching the class in Processing and producing libraries that facilitate each of these techniques (in fact, I’ve already started).

In addition to the motivations for this topic mentioned in the class description above, I also have another pet reason why I think this material matters. I hope this type of curriculum might be the start of something like an applied version of the New Aesthetic, teaching a set of skills and a body of knowledge that can move us beyond simply goggling at the output of drone vision systems, poetic spambots, and digitally fabricated high heels into deeply understanding the cluster of technologies that produce them and, in turn, using that understanding to produce things of our own. There’s no way a single 7-week class can hope to make more than a small start at a project like that, but a start is what comes first.

Paperless Post Tech Talk
April 26, 2012

A couple of weeks ago I delivered the inaugural tech talk at Paperless Post. I was invited by Paperless Post CTO Aaron Quint, who’s been a friend for a long while.

Aaron asked me to talk about my work with the Kinect and anything else that was on my mind. I took the opportunity to talk about two current projects. One of these, Makematics, I’d launched just that week, but I haven’t talked about it here much. It’s a project dedicated to turning computer science research into tools for creative work. I have more to announce on that topic shortly, but you can read my introductory post in the meantime.

The second project is one that’s not quite finished and this was the first time I’d talked about it publicly at all. It’s a design exploration into using faces as computer vision markers instead of abstract shapes. I call it You-R-Codes. I’ll have a more thorough presentation of it here soon so consider this a sneak preview.

Thanks to everyone who came out to the talk. It was a big friendly crowd with lots of great questions and discussion afterwards. And, of course, thanks to Aaron and the other folks at Paperless Post for inviting me and treating me so well. It was a good time.

Here are the slides:

The Era of Big VC is Over
November 5, 2010

The US venture capital industry is a transient epiphenomenon built upon the greatest bull market in the history of capitalism.

— William Janeway, Warburg Pincus

The venture capital industry underwent an epochal expansion around the year 2000 in the wake of the internet boom. In response to the incredible success of web companies, the amount of money invested in VC funds increased ten-fold before falling rapidly back to pre-boom levels after the crash.[1]

The result was a decade in which an unprecedented quantity of capital was available to technology entrepreneurs as venture capitalists invested the money raised during the boom. This period has now come to an end. Startups in the next ten years will have an order of magnitude less money available to them during the early stages of their development. This change will have dramatic effects on the kinds of opportunities available to entrepreneurs and technologists and on the kinds of change effected by the products they develop.

The History and Structure of Venture Capital Firms

Since its advent in the late 40s,[2] the venture capital industry has provided essential support to companies developing and commercializing new technology. Venture firms provide capital to startup businesses lacking access to traditional bank loans due to the unproven or risky nature of their enterprise. VCs played a central role in developing and scaling up many important technologies in the 20th century, from the semiconductor to the personal computer and the web.[3]

The size of the industry grew dramatically in the 80s and 90s. During the PC boom of the 80s, the amount of money VC firms had under management grew from $3 billion to $31 billion and the number of operating firms shot up to 650.[4] This expansion then accelerated during the late-90s dot-com bubble when, in 2000, VCs raised nearly the previous total holdings of the industry in each quarter, spiking the total funds up to $230 billion in just a few years.[5]

Quarterly venture capital investment data courtesy of the National Venture Capital Association

After 2000, fundraising returned to pre-boom levels, with around $9 billion raised so far in 2010.[6]

In other words, in both the 80s and 90s, the great body of venture capital investment acted as a trailing indicator of technical innovation — first following opportunity in personal computers after its early successes had become obvious and then doing likewise with the web.

Most venture capital funds are structured as private partnerships with a fixed term of seven to ten years.[7] Funds spend the first half of their terms selecting companies for investment and the second half shepherding their companies through “liquidity events”: acquisitions or initial public offerings that end the fund’s ownership of the company and provide the investors with a return on their money.

The combination of this sudden spike in dollars invested and this fixed-term structure resulted in a unique decade-long period in which a lump of cash on the order of $200 billion worked its way through the startup ecosystem. Much of our conventional wisdom about the culture and economics of startups is based on these conditions and is unlikely to hold up well outside of them.

The Last Ten Years: No Exit

In addition to a surplus of venture capital funding, there were three other major economic factors shaping the startup ecosystem in the last decade: the significantly lower cost of starting an online business, the weakness of the Initial Public Offering market, and the flight of institutional investors to other, more attractive forms of high-risk capital. Together they formed a kind of Bermuda Triangle for the VC industry: many entered, but few who did were ever seen again.

During the web boom, both hardware and software costs were high for nascent startups. Web server software was mostly proprietary and hence involved large licensing fees. For example, in 1995 a single license for Windows NT Server 3.5 cost $1,495.[8] Worse still, typical server hardware for running this operating system would have cost around $15,500.[9]

Today, open source software, cheap, extremely capacious commodity hardware, and the use of “cloud computing” services such as Amazon Web Services have driven both software and hardware costs down substantially,[10] a situation which reached a notable data point recently when Amazon announced a free pricing tier for its EC2 service. This lowering of costs has reduced startups’ demand for venture capital. Since it no longer takes millions of dollars up front to start a web company, founders are less motivated to sell off equity in exchange for funding.

On the other side of the equation, venture funds found fewer opportunities to profitably exit from their investments. In the 80s and 90s VCs had two exit strategies: acquisition by existing large companies and the issuing of public stock. Due to a murky combination of factors, including increased regulation of publicly owned companies enacted by Sarbanes-Oxley in 2002,[11] the number of venture-backed technology IPOs was down precipitously in the past decade. Between 1995 and 2000 the market averaged 204.5 IPOs per year, while the next seven years saw an average of just 49.[12]

Taken together, these two factors meant a large decrease in profit-making for venture capital firms, down from 85% return on investment for an average firm during the bubble years to -3% in the last five.[13] This catastrophic drop-off has caused venture firms to issue far fewer funds and some firms to close altogether.[14]

And, to make the situation even worse for VC firms, institutional investors have begun withdrawing their vast sums under management from the VC industry. During the 80s and 90s these investors gradually increased their investment in VC, contributing significantly to the build-up of VC funds. But after the crash of 2008 made them decidedly more risk averse, they have fled for safer options.[15]

What About Biotech?

One objection to this rather bleak portrait of the prospects for venture capital might be to argue that the problem is constrained to the web. A case might be made that the web has reached maturity as a technology and hence no longer needs VC investment to develop or scale, and so we should look for bright spots for venture investors in other sectors.

A prime candidate for such a sector would be biotechnology. And VC investment in biotechnology has increased since 2000, both as an absolute number and as a portion of total VC funds, growing from a mere 7.5% in 2000 to 34.6% in 2009.[16]

Life Sciences Investment data courtesy of the National Venture Capital Association

However, there are reasons to doubt venture capital’s effectiveness in supporting biotechnology. William Janeway, a Managing Director and Senior Advisor at venture firm Warburg Pincus, has argued that the long time it takes for life science innovations to progress from lab to clinic, combined with the high rate of attrition and the lack of positive cash flow for early investors, makes the biotech sector a tough environment for venture capitalists.[17] Further, the seeming profitability of the industry appears to have been artificially buoyed by the presence of a single highly profitable company: “if we take the largest and most profitable firm Amgen, out of the sample the industry has sustained heavy losses throughout its history.”[18]

Finally, counterintuitively, the prospects for profitability by biotech startups may actually be getting worse over time. “The field gets more complicated every year (from the Central Dogma, to Epigenetics, to Systems Biology),” Janeway says. “The big winners in BioTech came from the synthetic production in bacteria of naturally occurring proteins (insulin, human growth hormone) whose function had evolved over millions of years; the effort to produce truly novel therapeutics (e.g., fragments of protein) has been radically less productive. This suggests that it will get harder, not easier to make money in Biotech.”[19]

Conclusion

The state of the venture capital industry affects the kinds of startups that get founded and, through them, the types of jobs available in technology-related industries as well as the kinds of technologies that reach scale and affect our daily lives. In the 80s, when venture capital supported the personal computer industry, the jobs were at Microsoft, Apple, and Dell, doing hardware and operating systems engineering. In the 90s, when VC supported the web, the jobs were at Amazon and eBay and at short-lived, very well-funded flashes in the pan like Pets.com; they mainly encompassed server programming. In the last decade, VC massively supported social media sites like Facebook, Twitter, and Flickr. Most of the jobs at those companies were in still softer fields such as customer support, design, and marketing. In many ways these startups are more like media companies than the technology companies of the past.

What will the startups that thrive in the low-VC environment of the next decade look like? What will it feel like to work for web companies that bootstrap their own growth through revenue rather than taking big investment up front? Which technologies will fail to achieve wide adoption for lack of big up-front investment? Will another non-web industry come along and initiate a new boom and another trailing order-of-magnitude growth in VC investment?

Only time will tell.

Notes
1. Based on quarterly fundraising data gathered by the National Venture Capital Association. The data is available interactively on the NVCA website.
2. Georges Doriot founded the first venture firm, American Research and Development Corporation, in 1946. The firm’s greatest success was Digital Equipment Corporation, a leader in the development of the minicomputer. Spencer Ante, Creative Capital: Georges Doriot and the Birth of Venture Capital, Harvard Business School Press, 2008.
3. Legendary Silicon Valley firm Venrock Capital alone invested in Fairchild Semiconductor, Intel, and Apple. See the Venrock company timeline. Similarly, Kleiner Perkins Caufield and Byers supported many of the most important web companies, from Netscape to Google. See the Kleiner Perkins Caufield and Byers portfolio.
4. Andrew Pollack, “Venture Capital Loses Its Vigor”, The New York Times, October 8, 1989.
5. The total numbers under holding come from NVCA data sent to me via email by William Janeway, Managing Director and Senior Advisor at Warburg Pincus.
6. Ibid.
7. Jennifer A. Post, “An overview of US venture capital funds”, Alt Assets, 2001.
8. Jason Pontin, “Windows NT starts to pick up steam”, InfoWorld, January 30, 1995.
9. When InfoWorld ran a web server comparison test in April of 1996, they used “a Compaq 1500/5133 ProLiant with two 133-MHz Pentiums, 6GB of storage, 128MB of RAM and two Netflex 3P network interface cards” for their NT tests, a system which clocked in at $15,569 and was still less expensive than its Sun/Unix equivalent, which cost $22,995. “Are You Being Served?”, InfoWorld, April 8, 1996.
10. Guy Kawasaki, “By the Numbers: How I built a Web 2.0, User-Generated Content, Citizen Journalism, Long-Tail, Social Media Site for $12,107.09”, blog.guykawasaki.com, June 3, 2007.
11. Opinions differ on the effect of “SOX” on the IPO market. See Amy Feldman, “What Does Sarbanes-Oxley Mean for Companies That Want to Go Public?”, Inc., September 1, 2005, for pro, and Lynn Stephens and Robert G. Schwartz, “The Chilling Effect of Sarbanes-Oxley: Myth or Reality?”, The CPA Journal, June 2006, for con.
12. Jay Ritter, IPO Data. Venture Expert, Thomson Financial.
13. Data sent to me via email by William Janeway of Warburg Pincus.
14. Ibid. Pincus’s numbers place the number of funds issued at 653 in 2000 and 124 in 2010, for a decline of 94%.
15. Udayan Gupta, “Why Institutional Investors Are Turning Down Venture Funds”, Institutional Investor, September 21, 2010.
16. Data sent to me via email by William Janeway of Warburg Pincus.
17. Email conversation with the author.
18. Gary P. Pisano, “Science Business”, HBS Press, 2006, p. 117.
19. Email conversation with the author.