Artificial Intelligence - Lecture 6: Advanced search methods

Typically we can calculate a probability for each possible deal, which is just like having one big dice roll at the beginning of the game. The idea: compute the minimax value of each action in each deal, then choose the action with the highest expected value over all deals. Special case: if an action is optimal for all deals, it is optimal. (A sketch of this expected-value rule appears after the notes below.)

Artificial Intelligence (for the HEDSPI Project), Lecture 6 - Advanced search methods. Lecturers: Dr. Le Thanh Huong, Dr. Tran Duc Khanh, Dr. Hai V. Pham (HUST).

Outline:
- Local beam search
- Games and search
- Alpha-beta pruning

1. Local beam search
- Like greedy search, but keep k states at all times.
- Initially, start with k random states.
- Next, determine all successors of the k states.
- If any successor is a goal, finish; else select the k best successors and repeat.
(Figure: greedy search vs. beam search.)
- Major difference from random-restart search: information is shared among the k search threads. If one state generates a good successor but the others do not, the search moves there ("come here, the grass is greener").
- Can suffer from a lack of diversity.
- Stochastic variant: choose the k successors with probability proportional to each state's value. Often the best choice in many practical settings. (A sketch of both variants is given below.)

2. Games and search
- Why study games? Why is search a good idea here?
- Major assumptions about games: only the agent's actions change the world; the world is deterministic and accessible.
- Why study games: in May 1997, Deep Blue beat Garry Kasparov; machines are better than humans at Othello; humans are better than machines at Go.
- Here we consider perfect-information, zero-sum games.
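The local beam search steps above, including the stochastic variant, can be written down compactly. The following is a minimal Python sketch, not code from the slides: the helper names successors, value, and is_goal are assumptions, with a higher value meaning a better state.

```python
import random

def local_beam_search(initial_states, successors, value, is_goal,
                      max_iters=1000, stochastic=False):
    """Keep the k best states at every step instead of a single current state.

    initial_states: list of k (typically random) starting states
    successors(s): candidate states reachable from s
    value(s): heuristic score, higher is better (assumed non-negative
              if stochastic=True)
    is_goal(s): True if s is a goal state
    stochastic=True picks the next k states with probability proportional
    to their value instead of taking the strict top k.
    """
    beam = list(initial_states)
    k = len(beam)
    for _ in range(max_iters):
        # Pool the successors of all k states, so information is shared
        # among the k "search threads".
        pool = [s2 for s in beam for s2 in successors(s)]
        if not pool:
            break
        for s in pool:
            if is_goal(s):
                return s
        if stochastic:
            # Stochastic beam search: sample proportionally to value,
            # which helps preserve diversity in the beam.
            weights = [max(value(s), 1e-9) for s in pool]
            beam = random.choices(pool, weights=weights, k=k)
        else:
            # Plain local beam search: keep the k best successors.
            beam = sorted(pool, key=value, reverse=True)[:k]
    return max(beam, key=value)
```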
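The expected-value rule from the opening summary (compute the minimax value of each action in each deal, then pick the action with the highest expected value over all deals) can also be sketched directly. This is an illustrative sketch under assumed interfaces: deals is a list of (probability, deal) pairs, and minimax_value and result are hypothetical caller-supplied helpers.

```python
def best_action_over_deals(state, actions, deals, minimax_value, result):
    """Pick the action with the highest expected minimax value over all deals.

    deals: list of (probability, deal) pairs covering every possible deal
    minimax_value(state, deal): minimax value of `state` when the hidden
        information is fixed to `deal` (hypothetical helper)
    result(state, action): state reached by taking `action` in `state`
    """
    def expected_value(action):
        nxt = result(state, action)
        # E[value] = sum over deals of P(deal) * minimax value under that deal
        return sum(p * minimax_value(nxt, deal) for p, deal in deals)

    return max(actions, key=expected_value)
```

If one action maximizes minimax_value for every deal separately, it also maximizes the expectation, which is the special case noted above.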

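Alpha-beta pruning, listed in the outline, computes the same minimax values while skipping branches that cannot affect the final decision. Below is a minimal sketch for a deterministic, perfect-information zero-sum game, not taken from the slides; is_terminal, utility, legal_moves, and result are assumed, caller-supplied helpers.

```python
import math

def alphabeta(state, depth, alpha, beta, maximizing,
              is_terminal, utility, legal_moves, result):
    """Minimax value of `state` with alpha-beta cutoffs.

    alpha: best value the maximizer can already guarantee
    beta:  best value the minimizer can already guarantee
    """
    if depth == 0 or is_terminal(state):
        return utility(state)
    if maximizing:
        value = -math.inf
        for move in legal_moves(state):
            value = max(value, alphabeta(result(state, move), depth - 1,
                                         alpha, beta, False,
                                         is_terminal, utility,
                                         legal_moves, result))
            alpha = max(alpha, value)
            if alpha >= beta:   # cutoff: the minimizer will avoid this branch
                break
        return value
    else:
        value = math.inf
        for move in legal_moves(state):
            value = min(value, alphabeta(result(state, move), depth - 1,
                                         alpha, beta, True,
                                         is_terminal, utility,
                                         legal_moves, result))
            beta = min(beta, value)
            if alpha >= beta:   # cutoff: the maximizer will avoid this branch
                break
        return value
```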