Why are computers not better than they are at bridge?
#41
Posted 2015-April-01, 16:24
As we come to understand the human brain in more depth, and how it learns, perhaps this ability to learn will be transferable to a computer chip. At some point the computer may be creative enough to develop a learning process that is distinctly nonhuman but a better fit.
If you accept that computers can be intelligent, in other words that nothing in the laws of science prevents computers from gaining intelligence, then this should be possible. Please note that I define and measure the term intelligence in some generally accepted manner.
#42
Posted 2015-April-01, 16:28
johnu, on 2015-April-01, 14:35, said:
There are relatively simple ways to get around this limitation.
For example, suppose that two pairs of computer programs are competing against one another in a bridge match.
Instead of basing disclosure on a natural language description of the various bids, have the program generate 100,000 hands that are consistent with this bid / this auction.
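The scheme can be sketched in a few lines: rejection-sample random hands and keep the ones satisfying the bid's constraint. The 1NT predicate below (15-17 balanced) is just an assumed example of such a constraint, not any real program's definition.

```python
import random

RANKS = "23456789TJQKA"
SUITS = "SHDC"
DECK = [r + s for s in SUITS for r in RANKS]
HCP = {"A": 4, "K": 3, "Q": 2, "J": 1}

def hcp(hand):
    """High-card points of a 13-card hand."""
    return sum(HCP.get(card[0], 0) for card in hand)

def consistent_with_1nt(hand):
    """Hypothetical constraint: 15-17 HCP, balanced (no singleton or void,
    at most one doubleton, no suit longer than five cards)."""
    lengths = sorted(sum(1 for c in hand if c[1] == s) for s in SUITS)
    balanced = lengths[0] >= 2 and lengths[1] >= 3 and lengths[-1] <= 5
    return balanced and 15 <= hcp(hand) <= 17

def sample_consistent_hands(predicate, n):
    """Rejection-sample n random 13-card hands consistent with a bid."""
    hands = []
    while len(hands) < n:
        h = random.sample(DECK, 13)
        if predicate(h):
            hands.append(h)
    return hands

sample = sample_consistent_hands(consistent_with_1nt, 100)
```

Disclosure then becomes "here are 100,000 example hands for this auction" rather than a prose description, which another bot can consume directly.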
#43
Posted 2015-April-01, 20:09
This feeling/force/cause will be used to "understand" by experience!
Applying this to bridge, I understand that the thinking and analysis "result" in good card reading and affect our "result", and the computer will then "hold it down to =IOM
#44
Posted 2015-April-01, 21:41
But I will not, like the OP, remain in denial.
#45
Posted 2015-April-02, 01:00
johnu, on 2015-April-01, 14:35, said:
That's simply not true. Note for the remainder of the post that I'm only talking about play.
In the perfect world of computer bridge, the computer would make a single dummy simulation of ALL possible hands for each seat and base its play on that. The result would be the percentage play, and that's the best a computer can become. At the moment we only make a few double dummy simulations, and that's where it can go wrong. Making single dummy simulations requires a lot more computing power than double dummy simulations.
The total number of deals is 53,644,737,765,488,792,839,237,440,000 (5.36E28) without taking vulnerability and dealer into account. Once dummy is visible, declarer sees 27 cards (his own hand, dummy, and the opening lead), so there are 5,200,300 possible layouts of the defenders' 25 hidden cards to evaluate. After RHO has played there are 2,704,156 layouts left. As you can see, the number of possibilities decreases extremely fast. But this is DD! Suppose you can do 1 million DD analyses per second; then declarer needs about 5 seconds to play his first card, which is acceptable. When you need SD simulations, however, the amount of work is vastly bigger. Even if you only had to calculate a million times as much, do you know how long 1 million seconds is? It's 11.5 days! And since we can't actually evaluate 1 million DD hands per second, it's more like 1,000, in reality you'd wait about 32 years for declarer to play his first card... Lack of computing power? Hell yeah!
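For what it's worth, the figures in this post check out; a few lines of Python reproduce them (the timing lines just restate the post's own assumptions about solver speed):

```python
import math

# Total number of deals: 52! / (13!)^4, ignoring dealer and vulnerability.
total_deals = math.factorial(52) // math.factorial(13) ** 4
print(total_deals)  # 53644737765488792839237440000, about 5.36e28

# After the opening lead declarer sees 27 cards (own hand, dummy, the lead),
# leaving 25 hidden cards split 13-12 between the defenders.
layouts_trick_one = math.comb(25, 13)
print(layouts_trick_one)  # 5200300

# One defender's card later, 24 hidden cards split 12-12.
layouts_after_rho = math.comb(24, 12)
print(layouts_after_rho)  # 2704156

# A million seconds is about 11.5 days; a thousand times that (at a
# realistic 1000 double-dummy solves per second) is about 32 years.
days = 1_000_000 / 86400
years = 1_000_000_000 / (86400 * 365.25)
print(round(days, 1), round(years, 1))  # 11.6 31.7
```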
Similar to chess programs: they lack the computing power to calculate each and every possible move, so they use algorithms and heuristics to decide which moves to analyse and how a given position should be valued. With enough computing power you wouldn't need any of this and would have a perfect chess machine. The game tree would be gigantic and require enormous amounts of memory, and every position would have only 3 possible ratings: White wins, Black wins, or draw.
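The three-way rating idea can be seen on a game small enough to solve exhaustively. As a sketch, a memoized minimax over tic-tac-toe rates every position +1, 0, or -1 for the first player and confirms the well-known result that perfect play is a draw:

```python
from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if a line is complete, else None."""
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def solve(board, to_move):
    """Game value for X with best play: +1 X wins, 0 draw, -1 O wins."""
    w = winner(board)
    if w:
        return 1 if w == "X" else -1
    if "." not in board:
        return 0  # board full: draw
    values = []
    for i, cell in enumerate(board):
        if cell == ".":
            child = board[:i] + to_move + board[i + 1:]
            values.append(solve(child, "O" if to_move == "X" else "X"))
    return max(values) if to_move == "X" else min(values)

result = solve("." * 9, "X")
print(result)  # 0: tic-tac-toe is a draw with best play
```

The same idea scales (in principle, not in memory) to chess; for bridge the hidden hands are what breaks the analogy.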
The current bridge programs work in a similar way by making simulations, but they can't simulate every deal. So their quality depends on the quality of the simulations, and if they're good, then obviously the program is able to deliver quite good results already without having to simulate every possibility. More quality simulations give us a better result, and for this you need more computing power.
#46
Posted 2015-April-02, 01:04
On most forums you receive ONE email notification for each thread you're following, and you don't receive another one until you have visited the thread. Very sensible and very obvious.
Thanks.
D.
#47
Posted 2015-April-02, 01:11
Dinarius, on 2015-April-02, 01:04, said:
On most forums you receive ONE email notification for each thread you're following, and you don't receive another one until you have visited the thread. Very sensible and very obvious.
Thanks.
D.
Go to your account --> Edit my profile --> Settings --> Notifications and set your notifications as desired.
"It's only when a mosquito lands on your testicles that you realize there is always a way to solve problems without using violence!"
"Well to be perfectly honest, in my humble opinion, of course without offending anyone who thinks differently from my point of view, but also by looking into this matter in a different perspective and without being condemning of one's view's and by trying to make it objectified, and by considering each and every one's valid opinion, I honestly believe that I completely forgot what I was going to say."
#48
Posted 2015-April-02, 02:46
Free, on 2015-April-02, 01:00, said:
What you fail to understand is that computers don't have to be perfect, or even close to perfect, to be world class; they just have to play as well as the best human players. What difference does it make if you could single dummy analyze 5,200,300 hands for one trick? Not every hand is equally likely. The world class player will eliminate a huge percentage of them based on the bidding (or lack of it), the choice of suit led, the card led, signals, discards, etc. Human players obviously can't do millions of computations, but the best are viewed as world class. If humans don't need to do all those millions of computations to be world class, neither do computers. What they do need to do is reproduce or improve on the reasoning that humans do when planning the play, whether as declarer or on defense. IMO, that is a reasonable and achievable goal, but it will require some top programmers to spend a lot of time coding.
I do have problems with the way double dummy results are used: randomly playing high cards, including honors, when the double dummy analyzer says it doesn't make any difference double dummy. This includes things like leading a singleton king of trumps on the assumption that it would be dropped offside on a double dummy play, randomly playing a minor honor on the assumption that declarer will drop it or find it on a two-way finesse, or placing too much reliance on the opponents' bidding and not making a 100% play that doesn't depend on the bidding. These types of things need to be tweaked in the programming. The main problem with single or double dummy results is garbage in, garbage out: if you aren't modeling the right types of hands, your results aren't worth much. That means the analysis of the bidding and previous play has to be better and more flexible.
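The "randomly playing a minor honor" point is exactly where the equally-likely assumption misleads a simulator: the classic restricted-choice position. A small sketch of the a priori arithmetic, assuming a suit missing Q, J, and two spot cards, where declarer cashes the ace and an honor drops on the right:

```python
from math import comb

# Defenders hold 26 unknown cards, four of them in the key suit: Q, J, x, x.
def p_east_exact(k):
    """A priori probability that East's 13 cards contain one specific
    k-card holding in the suit (the other 13-k cards come from the
    22 cards outside the suit)."""
    return comb(22, 13 - k) / comb(26, 13)

p_stiff_honor = p_east_exact(1)   # East began with a specific singleton honor
p_qj_doubleton = p_east_exact(2)  # East began with QJ doubleton exactly

# East drops the queen under the ace. From QJ doubleton, an East who
# randomizes plays the Q only half the time; from a singleton Q, always.
odds_finesse = p_stiff_honor * 1.0
odds_drop = p_qj_doubleton * 0.5
p_finesse_wins = odds_finesse / (odds_finesse + odds_drop)
print(round(p_finesse_wins, 3))  # 0.647: the finesse is nearly a 2:1 favourite
```

A bot that plays its own honors randomly when double dummy says it doesn't matter is fine, but a bot that assumes opponents' honors appear uniformly at random gets this inference wrong.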
#49
Posted 2015-April-02, 04:29
Free, on 2015-April-02, 01:00, said:
In the perfect world of computer bridge, the computer would make a single dummy simulation of ALL possible hands for each seat and base its play on that.
I agree with your first statement but not with the next. Take chess computer development as a model. Modern chess computers store solutions for all endgames with 6 pieces and some with 7. Bridge could be "solved" in a similar way by storing "endgame" positions for a given number of cards. With enough computing power these could, in theory at least, be extended all the way up to 13 cards without simulations being required. That would be the perfect world. Of course that would still not be a solution to the game, as it does not take opposition bidding into account, but it would be a pretty good starting point.
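As a toy illustration of the tablebase idea (nothing like real four-hand bridge, but the same memoization principle), here is a solver for a made-up two-hand, one-suit, no-trump endgame where the winner of each trick leads to the next. Every position is solved once and cached, exactly as a tablebase would store it:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def tricks_on_lead(leader, follower):
    """Tricks the player currently on lead takes with best play by both.
    Hands are tuples of distinct ranks (higher number beats lower) of
    equal length; the winner of each trick leads to the next."""
    if not leader:
        return 0
    rest_tricks = len(leader) - 1  # tricks remaining after this one
    best = -1
    for l in leader:
        l_rest = tuple(c for c in leader if c != l)
        worst = None  # follower replies to minimize the leader's tricks
        for f in follower:
            f_rest = tuple(c for c in follower if c != f)
            if l > f:  # leader wins the trick and stays on lead
                v = 1 + tricks_on_lead(l_rest, f_rest)
            else:      # follower wins and takes over the lead
                v = rest_tricks - tricks_on_lead(f_rest, l_rest)
            worst = v if worst is None else min(worst, v)
        best = max(best, worst)
    return best

print(tricks_on_lead((2, 14), (3, 13)))  # 1: the ace scores, the deuce doesn't
```

Extending this to four hands, multiple suits, trumps, and (crucially) hidden cards is where the state space explodes, which is the point of the debate above.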
#51
Posted 2015-April-02, 05:37
StevenG, on 2015-April-02, 05:22, said:
SuitPlay goes (as I understand it) some way towards "solving" bridge by assuming that defenders follow an optimal mixed strategy. In principle this could be generalized to the whole hand (not just one suit) and to assuming that all three players follow an optimal mixed strategy.
I don't say it is easy: declarer's play depends on entry restrictions, avoidance plays, etc., and the defenders' play depends on the extent to which they know these things, and even the extent to which they know whether their partner knows about them, etc.
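In the simplest case, a zero-sum 2x2 game, the optimal mixed strategy has a closed form, and that is the kind of calculation such a defender model ultimately bottoms out in. A minimal sketch, using matching pennies as the worked example (the formula assumes the game has no saddle point, so the solution is strictly mixed):

```python
def optimal_row_mix(a, b, c, d):
    """Row player's optimal probability for strategy 1, and the game value,
    in a zero-sum 2x2 game with payoff matrix [[a, b], [c, d]] (row
    maximizes). Valid only when there is no saddle point, so the optimal
    strategy is strictly mixed."""
    denom = a - b - c + d
    p = (d - c) / denom
    value = (a * d - b * c) / denom
    return p, value

# Matching pennies: each player should mix 50/50 and the game value is 0.
print(optimal_row_mix(1, -1, -1, 1))  # (0.5, 0.0)
```

A defender choosing which honor to play from touching cards is a real-life instance: the optimal mix is 50/50, which is exactly what the restricted-choice reasoning assumes.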
#52
Posted 2015-April-02, 06:25
johnu, on 2015-April-02, 02:46, said:
The 5,200,300 is just a number to give an indication of the magnitude of the problem, and indeed not all hands are possible given the bidding. Nevertheless, even if only a fraction of them is left and you need to do single dummy analyses for 4 hands, you'll still require huge amounts of computing power. Limiting the number of simulations is the only option with a lack of computing power, and it can still result in decent performance. Using algorithms to eliminate hands is an obvious choice to gain speed and improve the number of quality simulations, similar to chess programs ignoring ridiculous moves to focus calculation on serious ones. You will indeed get a decent improvement. But in the case of computer bridge, more IS better, hence my argument against the claim that we have enough computing power.
Chess is actually a nice example where we lack the computing power to "solve" the game, and that's also the reason why computers can still be beaten from time to time! With enough computing power to actually solve the game, a chess computer would become better than world class chess players; even a team of grandmasters wouldn't be able to win (unless the game itself isn't neutral)! The same can't be said of bridge, because bridge isn't an exact science with complete information. The optimal result you can get in bridge is the percentage play, and that will fail on many occasions. Playing perfect percentage lines all the time won't guarantee that you'll win.
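What "percentage play" means numerically: the familiar a priori suit-split table falls straight out of the same combinatorics used earlier in the thread. A minimal sketch, a priori only, before any bidding or play inferences are applied:

```python
from math import comb

def split_prob(missing, k):
    """A priori probability that one named defender holds exactly k of the
    `missing` cards in a suit, with the defenders' 26 unseen cards split
    13-13."""
    return comb(missing, k) * comb(26 - missing, 13 - k) / comb(26, 13)

# Four missing cards: the familiar split table.
p22 = split_prob(4, 2)                     # 2-2 split
p31 = split_prob(4, 1) + split_prob(4, 3)  # 3-1 either way
p40 = split_prob(4, 0) + split_prob(4, 4)  # 4-0 either way
print(round(p22, 3), round(p31, 3), round(p40, 3))  # 0.407 0.497 0.096

# With nine cards missing Q-x-x-x, playing for the drop picks up the queen
# on any 2-2 split or a singleton queen on either side: about 53%.
p_stiff_q = 2 * comb(22, 12) / comb(26, 13)
print(round(p22 + p_stiff_q, 3))  # 0.531
```

Even the perfect percentage line fails whenever the actual layout is one of the minority cases, which is the poster's point: solving bridge cannot mean winning every deal.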
#53
Posted 2015-April-02, 13:40
"Given the current speed of progress, industry experts estimate that supercomputers will reach 1 EFLOPS (10^18, one quintillion FLoating point Operations Per Second) by 2018."
Just how much more powerful do computers need to become to do these simulations inside simulations to beat world class players at bridge? If we do not have enough, what is enough? What is the number?
#54
Posted 2015-April-02, 14:03
#55
Posted 2015-April-02, 14:55
Let's say that a human, based on what has happened (i.e. a conclusion), can be almost sure that, for example, when holding 11 trumps the break is 2-0, and therefore feels almost sure that it is right to finesse. Then no amount of computing power will be of any help to the computer.
#56
Posted 2015-April-02, 21:30
PhantomSac, on 2015-April-02, 14:03, said:
I have to agree with a number of points, including this one.
I loved Hrothgar's idea for explaining systems to another bot.
Even a tiny company like BBO has access to a lot of computational power. It just takes money, and the ability within the software to split up the problem (say, a simulation) so that you can have some large number of computers work on it at the same time.
I think there are several reasons why bots aren't better, though I can only speak to our GIB. Just my opinions; I'm not an expert on the guts of this stuff, though I tinker.
1. The lack of a meaningful incentive (or funding) to improve bots. I don't know why IBM developed Deep Blue. It couldn't have been cheap. What did they get out of it? In our world, who's interested/rich enough to fund this sort of development, and why would they?
2. The lack of a meaningful database or language to explain the meanings of calls ( or, if you prefer, the lack of a database that can explain what bid to make with some hand after some auction )
Sure, we add rules constantly. But it is like, as Georgi says, boiling the ocean. In GIB's particular case the language used to express the rules is quite difficult to work with and is limited in many ways. But making the developers of bot software also responsible for defining some sane system ( completely ) makes it harder.
There are no useful reference materials when trying to define stray sequences.
3. The difficulty of reading meaning into partners (opponents) actions on defence
4. Using double-dummy simulations instead of single dummy simulations
5. Limiting the computing resources ( time spent, if u prefer, on a single computer ) that the robot has access to per decision
6. The difficulty of trying to figure out what the trick target should be on any hand
There are others but those are the main ones that come to mind.
For GIB, the database of bids is the main technical obstacle, since flaws there cascade into other areas like card play and defence. But I think the main non-technical reason is mostly the lack of resources/incentive.
#57
Posted 2015-April-02, 22:11
For some reason members seem to think we need to teach the bots... no... let them learn.
Computing power needs to be cheap, very cheap. I mean hardware and software.
----------
If I understand Barmar (my words, not his), computing power remains too expensive in 2015. But at least give us a number... you guys do not give us a number... a goal.
give it time..
I advocated hologram BBO bridge many years ago and still await it. I mean a FULL hologram, not a cheap version.
#58
Posted 2015-April-03, 11:14
uday, on 2015-April-02, 21:30, said:
1. The lack of a meaningful incentive (or funding) to improve bots. I don't know why IBM developed Deep Blue. It couldn't have been cheap. What did they get out of it? In our world, who's interested/rich enough to fund this sort of development, and why would they?
Watson beat Jennings in Jeopardy. Computers are learning to think like humans. Do these top bridge playing programs really use lots of simulations? Humans don't use simulations to play hands.
#59
Posted 2015-April-03, 19:06
jogs, on 2015-April-03, 11:14, said:
Humans also do not analyse chess in a brute force way, yet it was the switch to this, away from trying to emulate human thinking, that caused the first big step forward in chess computer design. What works best for a computer is not necessarily what works best for a human.
#60
Posted 2015-April-04, 09:08