The Future of Machine Learning (and the End of the World?)

On Thursday (Oct 25) we had an event called the ML Futuristic Panel Discussion. The panelists were Ziv Bar-Joseph, Steve Fienberg, Tom Mitchell, Aarti Singh, and Alex Smola.

Ziv is an expert on machine learning and systems biology. Steve is a colleague of mine in Statistics with a joint appointment in ML; Tom is the head and founder of the ML department; Aarti is an Assistant Professor in ML; and Alex, who is well known as a pioneer in kernel methods, joins us as a professor in ML in January. An august panel to say the least.

The challenge was to predict what the next important breakthroughs in ML would be. It was also a discussion of where the panelists thought ML should be going in the future. Based on my notoriously unreliable memory, here is my summary of the key points.

1. What The Panelists Said

Aarti: ML is good at important but mundane tasks (classification etc) but not at higher level tasks like thinking of new hypotheses. We need ML techniques that play a bigger role in the whole process of making scientific discoveries. The more machines can do, the more high level tasks humans can concentrate their efforts on.

Ziv: There is a gap between the advances in systems biology and its use on practical problems, especially medicine. Each person is a repository of an unimaginable amount of data. An unsolved problem in ML is how to use all the knowledge we have developed in systems biology and use it for personalized medicine. In a sense, this is the problem of bridging information at the cell level and information at the level of an individual (consisting of trillions of interacting cells).

Steve: We should not forget the crucial role of intervention. Experiments involve manipulating variables. Passive ML methods are only part of the whole story. Statistics and ML methods help us learn, but then we have to decide what experiments to do, what interventions to make. Also, we have to decide what data to collect; not all data are useful. In other words, the future of ML has to still include human judgement.
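
Steve's point about deciding what data to collect has a simple illustration: when you can choose which points to query, far fewer labels are needed than with passive collection. Here is a minimal sketch (my own example, not from the panel; the labeling oracle is hypothetical) of actively locating a one-dimensional decision boundary by binary search, which needs only logarithmically many queries:

```python
def locate_threshold(label, lo=0.0, hi=1.0, tol=1e-3):
    """Actively choose which points to query: binary-search the decision
    boundary of a 1-D threshold classifier.  Each query targets the single
    most informative point, so we need O(log(1/tol)) labels instead of
    labeling a dense grid of points."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if label(mid):      # query the oracle at the midpoint
            hi = mid        # boundary lies to the left
        else:
            lo = mid        # boundary lies to the right
    return (lo + hi) / 2

# Hypothetical oracle with a true threshold at 0.37
est = locate_threshold(lambda x: x >= 0.37)
```

A passive learner labeling a uniform grid would need about 1/tol labels to reach the same accuracy; the active version needs roughly log2(1/tol).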

Tom: He joked that his mother was not impressed with ML. After all, she saw Tom grow from an infant who knew nothing into an adult who can do an amazing number of things. Tom says we need to learn how to “raise computers,” in analogy to raising children. We need machines that can learn how to learn. An example is the NELL project (Never-Ending Language Learning), which Tom leads. This system has been running since January 2010 and is learning how to read information from the web. Amazing stuff.
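
NELL's coupled, never-ending learning is far richer than any toy, but the core bootstrapping idea (seed instances suggest extraction patterns, which in turn harvest new instances) can be sketched in a few lines. The corpus and the "city" category below are invented for illustration, not taken from NELL:

```python
# Toy corpus of "web sentences"
corpus = [
    "cities such as Paris are crowded",
    "cities such as Tokyo are crowded",
    "I visited Paris last year",
    "I visited Lisbon last year",
    "Lisbon is lovely",
]

def bootstrap(corpus, seeds, rounds=2):
    """Alternate between (1) finding textual patterns that surround known
    instances and (2) using those patterns to extract new instances."""
    known = set(seeds)
    for _ in range(rounds):
        # 1. learn patterns: (prefix, suffix) contexts around known instances
        patterns = set()
        for sent in corpus:
            for inst in known:
                if inst in sent:
                    pre, _, post = sent.partition(inst)
                    patterns.add((pre, post))
        # 2. apply patterns: a word filling the same context is a candidate
        for sent in corpus:
            for pre, post in patterns:
                if pre and post and sent.startswith(pre) and sent.endswith(post):
                    cand = sent[len(pre):len(sent) - len(post)]
                    if cand and " " not in cand:
                        known.add(cand)
    return known

cities = bootstrap(corpus, seeds={"Paris"})
```

Starting from the single seed "Paris", the pattern "cities such as X are crowded" yields "Tokyo", and "I visited X last year" yields "Lisbon". Real systems like NELL must also couple many categories and relations together to keep this loop from drifting into nonsense.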

Alex: More and more, computing is done on huge numbers of highly connected, inexpensive processors. This raises many questions about how to design algorithms. There are interesting challenges for systems designers, ML people and statisticians. For example: can you design an estimator that can easily be distributed with little loss of statistical efficiency, and that is highly tolerant to failures of a small number of processors?
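
One classical partial answer to Alex's question is median-of-means: each machine computes a local average of its shard, and the coordinator takes the median of the surviving machines' averages. This is communication-efficient, loses little statistical efficiency, and is robust to a few failed (or corrupted) workers. A toy simulation (my own sketch, not anything presented at the panel):

```python
import random
import statistics

def distributed_mean(data, n_workers, failure_prob=0.1, seed=0):
    """Estimate the mean of `data` by splitting it across workers,
    averaging locally, then taking the median of the surviving workers'
    local averages (median-of-means)."""
    rng = random.Random(seed)
    chunk = len(data) // n_workers
    local_means = []
    for w in range(n_workers):
        if rng.random() < failure_prob:
            continue  # this worker failed; its local estimate is lost
        shard = data[w * chunk:(w + 1) * chunk]
        local_means.append(sum(shard) / len(shard))
    # median of local means: robust to a few missing or corrupted workers
    return statistics.median(local_means)

rng = random.Random(1)
data = [rng.gauss(5.0, 2.0) for _ in range(100_000)]
est = distributed_mean(data, n_workers=100)  # close to the true mean 5.0
```

Each worker communicates a single number, and losing 10% of the workers barely moves the median, whereas a straight sum over all shards would silently drop 10% of the data.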

2. The Future?

I found the panel discussion very inspiring. All of the panelists had interesting things to say. There was much discussion after the presentations. Martin Azizyan asked (and I am paraphrasing), “Have we really solved all the current ML problems?” The panel agreed that, no, we have not. We need to keep working on current problems (even if they seem mundane compared to the futuristic things discussed by the panel). But we can also work on the next generation of problems at the same time.

Discussing future trends is important. But we have to remember that we are probably wrong about our predictions. Niels Bohr said, “Prediction is very difficult, especially about the future.” And as Yogi Berra said, “The future ain’t what it used to be.”

When I was a kid, it was routinely predicted that, by the year 2000, people would fly to work with jetpacks, we’d have flying cars and we’d harvest our food from the sea. No one really predicted the world wide web, laptops, cellphones, gene microarrays etc.

3. The Return of AI

But I’ll take my chances and make a prediction anyway. I think Tom is right: computers that learn in ways closer to the ways humans learn are the future.

When I was in London in June, I had the pleasure of meeting Shane Legg from DeepMind Technologies. This is a startup that is trying to build a system that thinks. This was the original dream of AI.

As Shane explained to me, there has been huge progress in both neuroscience and ML, and their goal is to bring the two together. I thought it sounded crazy until he told me the list of famous billionaires who have invested in the company.

Which raises an interesting question. Suppose someone (Tom Mitchell, the people at DeepMind, or someone else) creates a truly intelligent system. Now they have a system as smart as a human. Then all they have to do is run the system on a huge machine with more horsepower than a human brain. Suddenly, we are in a world of super-intelligent computers surpassing humans.

Perhaps they’ll be nice to us. Or, it could turn into Robopocalypse. If so, this could mean the end of the world as we know it.

By the way, Daniel Wilson, the author of Robopocalypse, was a student at CMU. I heard rumours that he kept a picture of me on his desk to intimidate himself into working hard. I don’t think of myself as intimidating, so maybe this isn’t true. However, the book begins with a character named Professor Wasserman, a statistics professor, who unwittingly unleashes an intelligent program that leads to the Robopocalypse.

Steven Spielberg is making a movie based on the book, to be released April 25, 2104. So far, I have not had any calls from Spielberg.

So my prediction is this: someone other than me will be playing Professor Wasserman in the film adaptation of Robopocalypse.

What are your predictions for the future of ML and Statistics?

18 Comments

  1. Posted October 30, 2012 at 8:45 pm | Permalink

    Here’s a link to the product release of ARM in terms of what’ll be available in about 2 years from now. Basically they’re targeting the server center using relatively smallish processor cores but lots of them: http://www.engadget.com/photos/arm-cortex-a50-series-official-slides/

  2. Posted October 31, 2012 at 6:00 am | Permalink

    Larry,

    I have a somewhat different take on predicting the future in http://nuit-blanche.blogspot.com/2012/08/predicting-future-steamrollers.html

    I need to write part 3 🙂

    But the real important question is: will there be a mini-me version of Professor Wasserman?

    Igor.

  3. Posted October 31, 2012 at 6:27 am | Permalink

    I asked some researchers, including Shane Legg, about risks associated with artificial general intelligence. Their answers can be found here: http://wiki.lesswrong.com/wiki/Interview_series_on_risks_from_AI

  4. Keith O'Rourke
    Posted October 31, 2012 at 9:30 am | Permalink

    Larry: If it’s adapted for a Broadway musical, would you be interested in the role?

    My old boss, Detsky, now produces Broadway musicals, and I think you worked with him once?

    http://www.broadway.com/buzz/awards/tony-awards/nominees/241/jesus-christ-superstar/

    Always helps to know people.

    • Posted October 31, 2012 at 11:33 am | Permalink

      I prefer Hollywood

      • Keith O'Rourke
        Posted October 31, 2012 at 3:53 pm | Permalink

        Understand – look forward, on this very evening in years to come, to many kids arriving at the door in Prof Wasserman costumes …

        Are you sure you want the part?

        It is quite a compliment.

  5. Posted October 31, 2012 at 12:49 pm | Permalink

    On a lighter side, I had a chance to have a Twitter conversation with Daniel Wilson about the interesting rumor/prediction. Here is his response.

    https://twitter.com/danielwilsonpdx/status/263675260082192384

    https://twitter.com/danielwilsonpdx/status/263682102246137856

  6. Posted October 31, 2012 at 1:32 pm | Permalink

    Glad to hear there is a felt need/desire for the future of systematic learning to place greater emphasis on the processes of developing new hypotheses, learning from experimental manipulations, etc. To this end, people might need to move away from familiar conceptions of the nature of creative human learning, and I wonder if this will happen.

  7. Aarti Singh
    Posted October 31, 2012 at 7:26 pm | Permalink

    Thanks for the post, Larry. I already had a couple of inquiries about whether we recorded the panel discussion or had a summary post of it – now I can just point them to your blog!

    Can you also comment on what advances you think are necessary in statistics that will facilitate “computers that learn in ways closer to the ways humans learn”?

  8. Jotaf
    Posted November 1, 2012 at 9:32 pm | Permalink

    I think some of the most exciting developments may come from the perception that now we might have the tools to tackle the most frustrating challenges of “classical” AI, beyond machine learning (though there’s much to be done in this area obviously). An interesting direction in this regard is teaching deep networks with auxiliary tasks; Léon Bottou makes a good case for why this may be true (see “From Machine Learning to Machine Reasoning” on arXiv).

    On the other hand, it’s pretty scary that NELL lists “bloodletting is a hobby” as one of its high-confidence beliefs. Please help vote this belief down, or the human race is doomed!!
    http://rtw.ml.cmu.edu/rtw/kbbrowser/hobby:bloodletting

  9. Entsophy
    Posted November 2, 2012 at 8:34 pm | Permalink

    The future of ML is AI? I guess the future is exactly what it used to be.

    • Corey
      Posted November 3, 2012 at 11:11 am | Permalink

      Are you saying you don’t take Jaynes at his word when he writes that he’s showing how to program a robot to do plausible reasoning?

  10. Phong
    Posted November 22, 2012 at 1:01 pm | Permalink

    “Steve Speilberg is making a movie based on the book, to be released April 25 2104.” Yeah, it’s right when you haven’t got any call, because now is 2012. 93 years to come. 😀

  11. Posted December 7, 2012 at 2:09 pm | Permalink

    I love this blog. Great blog.

Trackbacks

  1. By Alexander Kruel · DeepMind Technologies on October 31, 2012 at 8:35 am

    […] Larry Wasserman reports, […]

  2. […] 2012, Carnegie Mellon professor Larry Wasserman wrote that the “startup is trying to build a system that thinks. This was the original dream of AI. […]
