Few topics capture the imaginations of computer science undergrads like machine learning. And for many of us, genetic algorithms in particular hold seemingly infinite promise. During your first year or two at school, you're on the verge of solving all of the world's problems using creatures pulled from your mind and made real by your trusty computer.
But then reality sets in, and we become jaded. The awesome AI in the game you're making (the one that was going to learn to walk, run, and ultimately outpace the player) just keeps finding physics bugs and getting stuck in walls. And the bots you wrote to manage a simulated hedge fund keep triggering massive selloffs and bringing down the whole virtual economy.
By the time we leave school we’ve given up on genetic algorithms. “Cute,” we say, “but too opaque and unwieldy to be used in the real world.”
Well, maybe our little creatures don't do what we want. Maybe they just like to misbehave, and maybe that's okay. By embracing the stochastic nature of genetic algorithms, we can turn them into some of the best validation tools we have. Let loose, these algorithms will try things that would never occur to us humans, and they can expose terrible bugs along the way.
So instead of trying to cultivate an AI that plays just like a human, try changing your fitness function to reward something odd, like collisions with walls. And maybe bots that make a bunch of money are overrated; perhaps there's something interesting to be learned from bots that try to cause massive market fluctuations.
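To make the idea concrete, here's a minimal sketch of that inverted fitness function. Everything in it is hypothetical: a toy agent wandering a small grid, where the fitness we maximize is the number of times it bumps into a wall rather than how far it travels. A population of evolved "wall-seekers" like this would hammer exactly the edge cases a normal player avoids.

```python
import random

# Hypothetical toy world: a 5x5 grid. Stepping off the edge counts as a
# wall collision, and the agent stays where it was.
GRID = 5
MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0)]

def wall_collisions(genome):
    """Inverted fitness: reward genomes whose move sequence hits walls often."""
    x = y = GRID // 2
    hits = 0
    for dx, dy in genome:
        nx, ny = x + dx, y + dy
        if 0 <= nx < GRID and 0 <= ny < GRID:
            x, y = nx, ny
        else:
            hits += 1  # bumped a wall; this is what we reward
    return hits

def evolve(pop_size=30, genome_len=20, generations=40, seed=0):
    """Tiny genetic algorithm: truncation selection, one-point crossover,
    point mutation. Returns the fittest genome found."""
    rng = random.Random(seed)
    pop = [[rng.choice(MOVES) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=wall_collisions, reverse=True)
        parents = pop[:pop_size // 2]           # keep the top half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, genome_len)  # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:              # occasional point mutation
                child[rng.randrange(genome_len)] = rng.choice(MOVES)
            children.append(child)
        pop = parents + children
    return max(pop, key=wall_collisions)

best = evolve()
print(wall_collisions(best))
```

Notice the best such genome is trivially simple: walk to the nearest wall, then push into it forever. The interesting part comes when you swap this grid for your real physics engine, where "push into the wall forever" is precisely how clipping bugs get found.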
I think the real promise of genetic algorithms is in breaking things, not in playing nicely within their virtual boxes.