Artificial general intelligence: The domain of the patient, philosophical coder | Ben Goertzel

My cousin who lives in Hong Kong is a game
programmer, and he loves what I’m doing but he just tells me when we discuss it, “I
need immediate gratification.” And he codes something and he sees a game
character do something cool, right? And if you need that, if you really need to
see something cool happen every day, AGI is not for you. In AGI you may work six months and nothing
interesting happens. And then something really interesting happens. So I think if someone doesn’t have that
kind of stubborn, pigheaded persistence, I will tend to employ them doing, for example,
data analysis, because that gives immediate gratification. You get a data set from a customer, you run
a machine learning algorithm on it and you get a result which is interesting. The customer is happy. Then you go on to the next data set. And if you explain the different types of
work available actually most people are pretty good at choosing what won’t drive them crazy. So some people are like “Yeah, I want to
do stuff that seems cool every day.” And other people are like “Well, I really
want to understand how thinking works. I want to understand how cognition and vision
work together, and that’s much more interesting to me than applying an existing vision algorithm
to solve someone’s problem.” So I tend to throw the issue at the potential
employee or volunteer themselves, and sometimes that works, sometimes it doesn’t. But I trust them to know themselves better
than I know them anyway. There are many different types and levels
of problems that one encounters in doing AI work, and there are sort of low-level algorithmic
problems or software design problems which are solved via clever tricks. And then there are deeper problems, like how
do you architect a perception system? How should perception and cognition work together
in an AI system? If a system knows one language how do you
leverage that knowledge to help it learn another language? I find, personally, that these deeper problems
are the kind of thing you solve while you’re walking the dog in the forest,
or taking a shower, or driving down the highway. And it seems to be that the people who make
headway on these deeper problems have the personality type that carries the problem
in their head all the time. You’ll think about this thing when
you go to sleep, you’re still thinking about it when you wake up, and you just keep chewing
on this issue a hundred times a day. It could be for days or weeks or years, or
even decades. And then the solution pops up for you. And not everyone has the inclination or personality
to be obsessive at sort of keeping a problem like an egg in your mind, in your focus, until
the solution hatches out. But that’s a particular cognitive style
or habit or skill which I see in everyone I know who’s really making headway on the
AGI problem.

44 Comments

  • blue_tetris

    It's refreshing knowing that there are CEOs out there who understand how technologies work, care about futurism, and can talk at length about something other than their successful regulatory capture. I doubt a guy like this will last long in the modern marketplace.

  • 2LegHumanist

    If this guy is a serious researcher, where does he find the time to write so many futurism books? Futurism is not AI BTW. Nothing to do with it. It's fantasy, singularity bullshit designed to extract money from people who lack the critical thinking skills to identify bullshit.

  • Erik S

    When I first saw this guy, the term "ALIENS!" came into my mind, a total charlatan crackpot… And then everything he said came true. So he's clearly another eccentric genius like Jaron Lanier. He does have a good point about some things needing to be mulled on and thought over for weeks before the solution just pops into your head. Great artists and writers know this feeling too, when something just bursts into your mind like a flood.

  • princeofexcess

    Working on AI does have immediate gratification, but the immediate gratification is not that your whole project works. The immediate gratification is that you learn something new or program a cool feature (which sometimes doesn't improve anything at all, but at least you solved a small problem).
    Just like in most other programming.
    In fact, anything that is complicated lacks the type of immediate gratification where you get immediate rewards for a small amount of work.

  • Izumi Laryukov

    Hi Ben! What do you think about this line of thought? Am I just being a silly goose? https://www.reddit.com/user/izumi3682/comments/9786um/but_whats_my_motivation_artificial_general/

  • jax10x

    People expect all big thoughts to come from a charismatic speaker, like in the movies. That's why they fall for charismatic frauds. This guy is telling us how to think about AGI.

  • W1llums

    This guy may look goofy, but he appears to be one of the most productive and highly qualified people working in AGI: a mathematics PhD, and chairman and key technical person at several AGI and AI companies.

  • stephan verbeeck

    So true. But if you then stop for a while and look back over the years, you also get that "Forrest Gump feeling" (that you have been running, nobody was able to keep up, and you are alone at the bleeding edge of technology), and that is gratifying too. The feeling that you yourself are not getting ahead is just your over-ambitious, unrealistic schedule expectations, caused by doing it all yourself.

  • Elise Bickel

    This video is so informative. SingularityNET's CEO knows how their product and services work; he even explained it very well. Hats off to this man!

  • Dashiell Cleg

    I understand how you think, but you just can't please everybody.
    You see, most people want day-by-day excitement and results.
    You say that AGI is not for that kind of people, but how about investors like us?
    We are the type of investors who get frustrated waiting to see some good results from time to time, because
    we believe in this project and that it will give us something.

  • Xavier Bailey

    I loved this video. We cannot stop this man from researching and creating AI. Very well done focusing on real topics instead of just jumping into the usual AI topics out there. Intelligent people will see positive things about this video, but looking at the dimwit comments here where people criticise AI and Dr. Ben, I must say that artificial intelligence is created to fill the community with smart, wise and rational minds. I guess if Dr. Ben spoke his mind, they'd be speechless. AI will play an important role in computer and machine development, and we should all be thankful that someone like Dr. Ben created something very helpful.

  • Dhanne1

    This talk maps onto MBTI personalities. His cousin likely has an ENTP personality, whereas Ben has an INTP personality. ENTPs tend to have a wide variety of brief interests, whereas INTPs ponder truths deeply and wholeheartedly on fewer topics.

    As an ENTP myself, I can summarise: INTPs are likely the ones who will invent AGI.

  • Ron Villejo

    Some people want to know what time it is; other people want to know how time works. They're different personalities. The latter personality may be required, in order to advance our understanding of AGI.
